00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 3390 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3001 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.090 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.091 The recommended git tool is: git 00:00:00.091 using credential 00000000-0000-0000-0000-000000000002 00:00:00.092 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.129 Fetching changes from the remote Git repository 00:00:00.131 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.166 Using shallow fetch with depth 1 00:00:00.166 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.166 > git --version # timeout=10 00:00:00.193 > git --version # 'git version 2.39.2' 00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.193 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.194 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.872 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.883 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.897 Checking out Revision 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 (FETCH_HEAD) 00:00:05.897 > git config core.sparsecheckout # timeout=10 00:00:05.909 > git read-tree -mu HEAD # timeout=10 00:00:05.924 > git checkout -f 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=5 00:00:05.939 Commit message: "perf/upload_to_db: update columns after changes in get_results.sh" 00:00:05.939 > git rev-list --no-walk 3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338 # timeout=10 00:00:06.052 [Pipeline] Start of Pipeline 00:00:06.064 [Pipeline] library 00:00:06.066 Loading library shm_lib@master 00:00:06.899 Library shm_lib@master is cached. Copying from home. 00:00:06.917 [Pipeline] node 00:00:06.978 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:06.979 [Pipeline] { 00:00:06.993 [Pipeline] catchError 00:00:06.995 [Pipeline] { 00:00:07.010 [Pipeline] wrap 00:00:07.018 [Pipeline] { 00:00:07.025 [Pipeline] stage 00:00:07.027 [Pipeline] { (Prologue) 00:00:07.043 [Pipeline] echo 00:00:07.044 Node: VM-host-SM9 00:00:07.049 [Pipeline] cleanWs 00:00:07.056 [WS-CLEANUP] Deleting project workspace... 00:00:07.056 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.061 [WS-CLEANUP] done 00:00:07.198 [Pipeline] setCustomBuildProperty 00:00:07.276 [Pipeline] nodesByLabel 00:00:07.277 Found a total of 1 nodes with the 'sorcerer' label 00:00:07.285 [Pipeline] httpRequest 00:00:07.290 HttpMethod: GET 00:00:07.290 URL: http://10.211.164.101/packages/jbp_3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338.tar.gz 00:00:07.299 Sending request to url: http://10.211.164.101/packages/jbp_3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338.tar.gz 00:00:07.314 Response Code: HTTP/1.1 200 OK 00:00:07.314 Success: Status code 200 is in the accepted range: 200,404 00:00:07.314 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338.tar.gz 00:00:11.881 [Pipeline] sh 00:00:12.161 + tar --no-same-owner -xf jbp_3fbc5c0ceee15b3cc82c7e28355dfd4637aa6338.tar.gz 00:00:12.180 [Pipeline] httpRequest 00:00:12.184 HttpMethod: GET 00:00:12.185 URL: http://10.211.164.101/packages/spdk_a1264177cd264beaac27476984a68dceee651050.tar.gz 00:00:12.185 Sending request to url: http://10.211.164.101/packages/spdk_a1264177cd264beaac27476984a68dceee651050.tar.gz 00:00:12.200 Response Code: HTTP/1.1 200 OK 00:00:12.201 Success: Status code 200 is in the accepted range: 200,404 00:00:12.201 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_a1264177cd264beaac27476984a68dceee651050.tar.gz 00:01:26.757 [Pipeline] sh 00:01:27.037 + tar --no-same-owner -xf spdk_a1264177cd264beaac27476984a68dceee651050.tar.gz 00:01:29.582 [Pipeline] sh 00:01:29.863 + git -C spdk log --oneline -n5 00:01:29.863 a1264177c pkgdep/git: Adjust ICE driver to kernel >= 6.8.x 00:01:29.863 af95268b1 pkgdep/git: Adjust QAT driver to kernel >= 6.8.x 00:01:29.863 5e75b9137 scripts/pkgdep: Simplify mdl installation 00:01:29.863 ba909a45b lib/iscsi: add rpc method iscsi_get_histogram 00:01:29.863 90ba272ce lib/iscsi: add rpc method iscsi_enable_histogram 00:01:29.884 [Pipeline] withCredentials 00:01:29.894 > git --version # timeout=10 00:01:29.906 > git --version # 'git version 2.39.2' 00:01:29.922 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:29.924 [Pipeline] { 00:01:29.933 [Pipeline] retry 00:01:29.935 [Pipeline] { 00:01:29.973 [Pipeline] sh 00:01:30.253 + git ls-remote http://dpdk.org/git/dpdk main 00:01:30.266 [Pipeline] } 00:01:30.288 [Pipeline] // retry 00:01:30.294 [Pipeline] } 00:01:30.314 [Pipeline] // withCredentials 00:01:30.327 [Pipeline] httpRequest 00:01:30.331 HttpMethod: GET 00:01:30.331 URL: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:30.336 Sending request to url: http://10.211.164.101/packages/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:30.339 Response Code: HTTP/1.1 200 OK 00:01:30.339 Success: Status code 200 is in the accepted range: 200,404 00:01:30.340 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:32.709 [Pipeline] sh 00:01:32.987 + tar --no-same-owner -xf dpdk_7e06c0de1952d3109a5b0c4779d7e7d8059c9d78.tar.gz 00:01:34.374 [Pipeline] sh 00:01:34.652 + git -C dpdk log --oneline -n5 00:01:34.652 7e06c0de19 examples: move alignment attribute on types for MSVC 00:01:34.652 27595cd830 drivers: move alignment attribute on types for MSVC 00:01:34.652 0efea35a2b app: move alignment attribute on types for MSVC 00:01:34.652 e2e546ab5b version: 24.07-rc0 00:01:34.652 a9778aad62 version: 24.03.0 00:01:34.668 [Pipeline] writeFile 00:01:34.681 [Pipeline] sh 00:01:34.960 + 
jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:34.972 [Pipeline] sh 00:01:35.250 + cat autorun-spdk.conf 00:01:35.250 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.250 SPDK_TEST_NVMF=1 00:01:35.250 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.250 SPDK_TEST_URING=1 00:01:35.250 SPDK_TEST_USDT=1 00:01:35.250 SPDK_RUN_UBSAN=1 00:01:35.250 NET_TYPE=virt 00:01:35.250 SPDK_TEST_NATIVE_DPDK=main 00:01:35.250 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.250 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.257 RUN_NIGHTLY=1 00:01:35.259 [Pipeline] } 00:01:35.274 [Pipeline] // stage 00:01:35.287 [Pipeline] stage 00:01:35.289 [Pipeline] { (Run VM) 00:01:35.302 [Pipeline] sh 00:01:35.581 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:35.581 + echo 'Start stage prepare_nvme.sh' 00:01:35.581 Start stage prepare_nvme.sh 00:01:35.581 + [[ -n 0 ]] 00:01:35.581 + disk_prefix=ex0 00:01:35.581 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:35.581 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:35.581 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:35.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.581 ++ SPDK_TEST_NVMF=1 00:01:35.581 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.581 ++ SPDK_TEST_URING=1 00:01:35.581 ++ SPDK_TEST_USDT=1 00:01:35.581 ++ SPDK_RUN_UBSAN=1 00:01:35.581 ++ NET_TYPE=virt 00:01:35.581 ++ SPDK_TEST_NATIVE_DPDK=main 00:01:35.581 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.581 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.581 ++ RUN_NIGHTLY=1 00:01:35.581 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:35.581 + nvme_files=() 00:01:35.581 + declare -A nvme_files 00:01:35.581 + backend_dir=/var/lib/libvirt/images/backends 00:01:35.581 + nvme_files['nvme.img']=5G 00:01:35.581 + nvme_files['nvme-cmb.img']=5G 00:01:35.581 + nvme_files['nvme-multi0.img']=4G 00:01:35.581 + nvme_files['nvme-multi1.img']=4G 00:01:35.581 + nvme_files['nvme-multi2.img']=4G 00:01:35.581 + nvme_files['nvme-openstack.img']=8G 00:01:35.581 + nvme_files['nvme-zns.img']=5G 00:01:35.581 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:35.581 + (( SPDK_TEST_FTL == 1 )) 00:01:35.581 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:35.581 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:35.581 + for nvme in "${!nvme_files[@]}" 00:01:35.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:01:35.581 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.581 + for nvme in "${!nvme_files[@]}" 00:01:35.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:01:35.581 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.581 + for nvme in "${!nvme_files[@]}" 00:01:35.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:01:35.581 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:35.581 + for nvme in "${!nvme_files[@]}" 00:01:35.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:01:35.581 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.581 + for nvme in "${!nvme_files[@]}" 00:01:35.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:01:35.581 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.581 + for nvme in "${!nvme_files[@]}" 00:01:35.581 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:01:35.840 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.840 + for nvme in "${!nvme_files[@]}" 00:01:35.840 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:01:35.840 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.840 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:01:35.840 + echo 'End stage prepare_nvme.sh' 00:01:35.840 End stage prepare_nvme.sh 00:01:35.851 [Pipeline] sh 00:01:36.129 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:36.129 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -H -a -v -f fedora38 00:01:36.388 00:01:36.388 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:36.388 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:36.388 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.388 HELP=0 00:01:36.388 DRY_RUN=0 00:01:36.388 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img, 00:01:36.388 NVME_DISKS_TYPE=nvme,nvme, 00:01:36.388 NVME_AUTO_CREATE=0 00:01:36.388 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img, 00:01:36.388 NVME_CMB=,, 00:01:36.388 NVME_PMR=,, 00:01:36.388 NVME_ZNS=,, 00:01:36.388 NVME_MS=,, 00:01:36.388 NVME_FDP=,, 
00:01:36.388 SPDK_VAGRANT_DISTRO=fedora38 00:01:36.388 SPDK_VAGRANT_VMCPU=10 00:01:36.388 SPDK_VAGRANT_VMRAM=12288 00:01:36.388 SPDK_VAGRANT_PROVIDER=libvirt 00:01:36.388 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:36.388 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:36.388 SPDK_OPENSTACK_NETWORK=0 00:01:36.388 VAGRANT_PACKAGE_BOX=0 00:01:36.388 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:36.388 FORCE_DISTRO=true 00:01:36.388 VAGRANT_BOX_VERSION= 00:01:36.388 EXTRA_VAGRANTFILES= 00:01:36.388 NIC_MODEL=e1000 00:01:36.388 00:01:36.388 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:36.388 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:39.684 Bringing machine 'default' up with 'libvirt' provider... 00:01:39.943 ==> default: Creating image (snapshot of base box volume). 00:01:39.943 ==> default: Creating domain with the following settings... 00:01:39.943 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1713840378_1524df7cd6f31c9e3ee8 00:01:39.943 ==> default: -- Domain type: kvm 00:01:39.943 ==> default: -- Cpus: 10 00:01:39.943 ==> default: -- Feature: acpi 00:01:39.943 ==> default: -- Feature: apic 00:01:39.943 ==> default: -- Feature: pae 00:01:39.943 ==> default: -- Memory: 12288M 00:01:39.943 ==> default: -- Memory Backing: hugepages: 00:01:39.943 ==> default: -- Management MAC: 00:01:39.943 ==> default: -- Loader: 00:01:39.943 ==> default: -- Nvram: 00:01:39.943 ==> default: -- Base box: spdk/fedora38 00:01:39.943 ==> default: -- Storage pool: default 00:01:39.943 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1713840378_1524df7cd6f31c9e3ee8.img (20G) 00:01:39.943 ==> default: -- Volume Cache: default 00:01:39.943 ==> default: -- Kernel: 00:01:39.943 ==> default: -- Initrd: 00:01:39.943 ==> default: -- Graphics Type: vnc 00:01:39.943 ==> default: -- Graphics Port: -1 00:01:39.943 ==> default: -- Graphics IP: 127.0.0.1 00:01:39.943 ==> default: -- Graphics Password: Not defined 00:01:39.943 ==> default: -- Video Type: cirrus 00:01:39.943 ==> default: -- Video VRAM: 9216 00:01:39.943 ==> default: -- Sound Type: 00:01:39.943 ==> default: -- Keymap: en-us 00:01:39.943 ==> default: -- TPM Path: 00:01:39.944 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:39.944 ==> default: -- Command line args: 00:01:39.944 ==> default: -> value=-device, 00:01:39.944 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:39.944 ==> default: -> value=-drive, 00:01:39.944 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-0-drive0, 00:01:39.944 ==> default: -> value=-device, 00:01:39.944 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.944 ==> default: -> value=-device, 00:01:39.944 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:39.944 ==> default: -> value=-drive, 00:01:39.944 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:39.944 ==> default: -> value=-device, 00:01:39.944 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.944 ==> default: -> value=-drive, 00:01:39.944 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:39.944 ==> default: -> value=-device, 00:01:39.944 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.944 ==> default: -> value=-drive, 00:01:39.944 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:39.944 ==> default: -> value=-device, 00:01:39.944 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.944 ==> default: Creating shared folders metadata... 00:01:39.944 ==> default: Starting domain. 00:01:41.323 ==> default: Waiting for domain to get an IP address... 00:01:59.444 ==> default: Waiting for SSH to become available... 00:02:00.382 ==> default: Configuring and enabling network interfaces... 00:02:04.573 default: SSH address: 192.168.121.212:22 00:02:04.573 default: SSH username: vagrant 00:02:04.573 default: SSH auth method: private key 00:02:07.110 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:13.670 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:20.243 ==> default: Mounting SSHFS shared folder... 00:02:21.179 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:21.179 ==> default: Checking Mount.. 00:02:22.558 ==> default: Folder Successfully Mounted! 00:02:22.558 ==> default: Running provisioner: file... 00:02:23.496 default: ~/.gitconfig => .gitconfig 00:02:23.756 00:02:23.756 SUCCESS! 00:02:23.756 00:02:23.756 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:23.756 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:23.756 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:23.756 00:02:23.765 [Pipeline] } 00:02:23.782 [Pipeline] // stage 00:02:23.792 [Pipeline] dir 00:02:23.792 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:23.794 [Pipeline] { 00:02:23.807 [Pipeline] catchError 00:02:23.809 [Pipeline] { 00:02:23.824 [Pipeline] sh 00:02:24.103 + vagrant ssh-config --host vagrant 00:02:24.103 + sed -ne /^Host/,$p 00:02:24.103 + tee ssh_conf 00:02:27.390 Host vagrant 00:02:27.390 HostName 192.168.121.212 00:02:27.390 User vagrant 00:02:27.390 Port 22 00:02:27.390 UserKnownHostsFile /dev/null 00:02:27.390 StrictHostKeyChecking no 00:02:27.390 PasswordAuthentication no 00:02:27.390 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:02:27.390 IdentitiesOnly yes 00:02:27.390 LogLevel FATAL 00:02:27.390 ForwardAgent yes 00:02:27.390 ForwardX11 yes 00:02:27.390 00:02:27.403 [Pipeline] withEnv 00:02:27.405 [Pipeline] { 00:02:27.419 [Pipeline] sh 00:02:27.699 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:27.699 source /etc/os-release 00:02:27.699 [[ -e /image.version ]] && img=$(< /image.version) 00:02:27.699 # Minimal, systemd-like check. 
00:02:27.699 if [[ -e /.dockerenv ]]; then 00:02:27.699 # Clear garbage from the node's name: 00:02:27.699 # agt-er_autotest_547-896 -> autotest_547-896 00:02:27.699 # $HOSTNAME is the actual container id 00:02:27.699 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:27.699 if mountpoint -q /etc/hostname; then 00:02:27.699 # We can assume this is a mount from a host where container is running, 00:02:27.699 # so fetch its hostname to easily identify the target swarm worker. 00:02:27.699 container="$(< /etc/hostname) ($agent)" 00:02:27.699 else 00:02:27.699 # Fallback 00:02:27.699 container=$agent 00:02:27.699 fi 00:02:27.699 fi 00:02:27.699 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:27.699 00:02:27.970 [Pipeline] } 00:02:27.989 [Pipeline] // withEnv 00:02:27.998 [Pipeline] setCustomBuildProperty 00:02:28.013 [Pipeline] stage 00:02:28.015 [Pipeline] { (Tests) 00:02:28.034 [Pipeline] sh 00:02:28.315 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:28.588 [Pipeline] timeout 00:02:28.588 Timeout set to expire in 30 min 00:02:28.590 [Pipeline] { 00:02:28.606 [Pipeline] sh 00:02:28.886 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:29.455 HEAD is now at a1264177c pkgdep/git: Adjust ICE driver to kernel >= 6.8.x 00:02:29.467 [Pipeline] sh 00:02:29.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:30.022 [Pipeline] sh 00:02:30.301 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:30.575 [Pipeline] sh 00:02:30.854 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:02:31.112 ++ readlink -f spdk_repo 00:02:31.112 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:31.112 + [[ -n /home/vagrant/spdk_repo ]] 00:02:31.112 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:31.112 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:31.112 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:31.112 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:31.112 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:31.112 + cd /home/vagrant/spdk_repo 00:02:31.112 + source /etc/os-release 00:02:31.112 ++ NAME='Fedora Linux' 00:02:31.112 ++ VERSION='38 (Cloud Edition)' 00:02:31.112 ++ ID=fedora 00:02:31.112 ++ VERSION_ID=38 00:02:31.112 ++ VERSION_CODENAME= 00:02:31.112 ++ PLATFORM_ID=platform:f38 00:02:31.112 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:31.112 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:31.112 ++ LOGO=fedora-logo-icon 00:02:31.112 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:31.112 ++ HOME_URL=https://fedoraproject.org/ 00:02:31.112 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:31.112 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:31.112 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:31.112 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:31.112 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:31.112 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:31.112 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:31.112 ++ SUPPORT_END=2024-05-14 00:02:31.112 ++ VARIANT='Cloud Edition' 00:02:31.112 ++ VARIANT_ID=cloud 00:02:31.112 + uname -a 00:02:31.112 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:31.112 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:31.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:31.370 Hugepages 00:02:31.370 node hugesize free / total 00:02:31.370 node0 1048576kB 0 / 0 00:02:31.370 node0 2048kB 0 / 0 00:02:31.370 00:02:31.370 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.629 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:31.629 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:31.629 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:31.629 + rm -f /tmp/spdk-ld-path 00:02:31.629 + source autorun-spdk.conf 00:02:31.629 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.629 ++ SPDK_TEST_NVMF=1 00:02:31.629 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.629 ++ SPDK_TEST_URING=1 00:02:31.629 ++ SPDK_TEST_USDT=1 00:02:31.629 ++ SPDK_RUN_UBSAN=1 00:02:31.629 ++ NET_TYPE=virt 00:02:31.629 ++ SPDK_TEST_NATIVE_DPDK=main 00:02:31.629 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.629 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.629 ++ RUN_NIGHTLY=1 00:02:31.629 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:31.629 + [[ -n '' ]] 00:02:31.629 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:31.629 + for M in /var/spdk/build-*-manifest.txt 00:02:31.629 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:31.629 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.629 + for M in /var/spdk/build-*-manifest.txt 00:02:31.629 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:31.629 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:31.629 ++ uname 00:02:31.629 + [[ Linux == \L\i\n\u\x ]] 00:02:31.629 + sudo dmesg -T 00:02:31.629 + sudo dmesg --clear 00:02:31.629 + dmesg_pid=5889 00:02:31.629 + [[ Fedora Linux == FreeBSD ]] 00:02:31.629 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.629 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.629 + sudo dmesg -Tw 00:02:31.629 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:31.629 + [[ -x /usr/src/fio-static/fio ]] 00:02:31.629 + 
export FIO_BIN=/usr/src/fio-static/fio 00:02:31.629 + FIO_BIN=/usr/src/fio-static/fio 00:02:31.629 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:31.629 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:31.629 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:31.629 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.629 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.629 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:31.629 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.629 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.629 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:31.629 Test configuration: 00:02:31.629 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.629 SPDK_TEST_NVMF=1 00:02:31.629 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.629 SPDK_TEST_URING=1 00:02:31.629 SPDK_TEST_USDT=1 00:02:31.629 SPDK_RUN_UBSAN=1 00:02:31.629 NET_TYPE=virt 00:02:31.629 SPDK_TEST_NATIVE_DPDK=main 00:02:31.629 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.629 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.887 RUN_NIGHTLY=1 02:47:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:31.887 02:47:10 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:31.887 02:47:10 -- scripts/common.sh@510 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.887 02:47:10 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.887 02:47:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.887 02:47:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.887 02:47:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.887 02:47:10 -- paths/export.sh@5 -- $ export PATH 00:02:31.888 02:47:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.888 02:47:10 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:31.888 02:47:10 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:31.888 02:47:10 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713840430.XXXXXX 
00:02:31.888 02:47:10 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713840430.jrC7mS 00:02:31.888 02:47:10 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:31.888 02:47:10 -- common/autobuild_common.sh@441 -- $ '[' -n main ']' 00:02:31.888 02:47:10 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:31.888 02:47:10 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:31.888 02:47:10 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:31.888 02:47:10 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:31.888 02:47:10 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:31.888 02:47:10 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:02:31.888 02:47:10 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.888 02:47:10 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:31.888 02:47:10 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:02:31.888 02:47:10 -- pm/common@17 -- $ local monitor 00:02:31.888 02:47:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.888 02:47:10 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5925 00:02:31.888 02:47:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.888 02:47:10 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5927 00:02:31.888 02:47:10 -- pm/common@26 -- $ sleep 1 00:02:31.888 02:47:10 -- pm/common@21 -- $ date +%s 00:02:31.888 02:47:10 -- pm/common@21 -- $ date +%s 00:02:31.888 02:47:10 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713840430 00:02:31.888 02:47:10 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1713840430 00:02:31.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713840430_collect-vmstat.pm.log 00:02:31.888 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1713840430_collect-cpu-load.pm.log 00:02:32.821 02:47:11 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:02:32.821 02:47:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.821 02:47:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.821 02:47:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:32.821 02:47:11 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.821 Tue Apr 23 02:47:11 AM UTC 2024 00:02:32.821 02:47:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.821 v24.05-pre-435-ga1264177c 00:02:32.821 02:47:11 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:32.821 02:47:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:32.821 02:47:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:32.821 02:47:11 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:32.821 02:47:11 -- 
common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:32.821 02:47:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.821 ************************************ 00:02:32.821 START TEST ubsan 00:02:32.821 ************************************ 00:02:32.821 using ubsan 00:02:32.821 02:47:11 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:02:32.821 00:02:32.821 real 0m0.000s 00:02:32.821 user 0m0.000s 00:02:32.821 sys 0m0.000s 00:02:32.821 02:47:11 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:02:32.821 ************************************ 00:02:32.821 END TEST ubsan 00:02:32.821 ************************************ 00:02:32.821 02:47:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.081 02:47:11 -- spdk/autobuild.sh@27 -- $ '[' -n main ']' 00:02:33.081 02:47:11 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:33.081 02:47:11 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:33.081 02:47:11 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:02:33.081 02:47:11 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:33.081 02:47:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:33.081 ************************************ 00:02:33.081 START TEST build_native_dpdk 00:02:33.081 ************************************ 00:02:33.081 02:47:12 -- common/autotest_common.sh@1111 -- $ _build_native_dpdk 00:02:33.081 02:47:12 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:33.081 02:47:12 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:33.081 02:47:12 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:33.081 02:47:12 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:33.081 02:47:12 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:33.081 02:47:12 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:33.081 02:47:12 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:33.081 02:47:12 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:33.081 02:47:12 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:33.081 02:47:12 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:33.081 02:47:12 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:33.081 02:47:12 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:33.081 02:47:12 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:33.081 02:47:12 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:33.081 02:47:12 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:33.081 02:47:12 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /home/vagrant/spdk_repo/dpdk ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:33.081 02:47:12 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:33.081 7e06c0de19 examples: move alignment attribute on types for MSVC 00:02:33.081 27595cd830 drivers: move alignment attribute on types for MSVC 00:02:33.081 0efea35a2b app: move alignment attribute on types for MSVC 00:02:33.081 e2e546ab5b version: 24.07-rc0 00:02:33.081 a9778aad62 version: 24.03.0 00:02:33.081 02:47:12 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:33.081 02:47:12 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:33.081 02:47:12 -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.07.0-rc0 00:02:33.081 02:47:12 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:33.081 02:47:12 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:33.081 02:47:12 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:33.081 02:47:12 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:33.081 02:47:12 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:33.081 02:47:12 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:33.081 02:47:12 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:33.081 02:47:12 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:33.081 02:47:12 -- common/autobuild_common.sh@169 -- $ lt 24.07.0-rc0 21.11.0 00:02:33.081 02:47:12 -- scripts/common.sh@370 -- $ cmp_versions 24.07.0-rc0 '<' 21.11.0 00:02:33.081 02:47:12 -- scripts/common.sh@330 -- $ local ver1 ver1_l 00:02:33.081 02:47:12 -- scripts/common.sh@331 -- $ local ver2 ver2_l 00:02:33.081 02:47:12 -- scripts/common.sh@333 -- $ IFS=.-: 00:02:33.081 02:47:12 -- scripts/common.sh@333 -- $ read -ra ver1 00:02:33.081 02:47:12 -- scripts/common.sh@334 -- $ IFS=.-: 00:02:33.081 02:47:12 -- scripts/common.sh@334 -- $ read -ra ver2 00:02:33.081 02:47:12 -- scripts/common.sh@335 -- $ local 'op=<' 00:02:33.081 02:47:12 -- scripts/common.sh@337 -- $ ver1_l=4 00:02:33.081 02:47:12 -- scripts/common.sh@338 -- $ ver2_l=3 00:02:33.081 02:47:12 -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v 00:02:33.081 02:47:12 -- scripts/common.sh@341 -- $ case "$op" in 00:02:33.081 02:47:12 -- scripts/common.sh@342 -- $ : 1 00:02:33.081 02:47:12 -- scripts/common.sh@361 -- $ (( v = 0 )) 00:02:33.081 02:47:12 -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:33.081 02:47:12 -- scripts/common.sh@362 -- $ decimal 24 00:02:33.081 02:47:12 -- scripts/common.sh@350 -- $ local d=24 00:02:33.081 02:47:12 -- scripts/common.sh@351 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:33.081 02:47:12 -- scripts/common.sh@352 -- $ echo 24 00:02:33.081 02:47:12 -- scripts/common.sh@362 -- $ ver1[v]=24 00:02:33.081 02:47:12 -- scripts/common.sh@363 -- $ decimal 21 00:02:33.081 02:47:12 -- scripts/common.sh@350 -- $ local d=21 00:02:33.081 02:47:12 -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:33.081 02:47:12 -- scripts/common.sh@352 -- $ echo 21 00:02:33.081 02:47:12 -- scripts/common.sh@363 -- $ ver2[v]=21 00:02:33.081 02:47:12 -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] )) 00:02:33.081 02:47:12 -- scripts/common.sh@364 -- $ return 1 00:02:33.081 02:47:12 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:33.081 patching file config/rte_config.h 00:02:33.081 Hunk #1 succeeded at 70 (offset 11 lines). 00:02:33.081 02:47:12 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:33.081 02:47:12 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:33.081 02:47:12 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:33.081 02:47:12 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:33.081 02:47:12 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:38.356 The Meson build system 00:02:38.356 Version: 1.3.1 00:02:38.356 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:38.356 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:38.356 Build type: native build 00:02:38.356 Program cat found: YES (/usr/bin/cat) 00:02:38.356 Project name: DPDK 00:02:38.356 Project version: 24.07.0-rc0 00:02:38.356 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:38.356 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:38.356 Host machine cpu family: x86_64 00:02:38.356 Host machine cpu: x86_64 00:02:38.356 Message: ## Building in Developer Mode ## 00:02:38.356 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:38.356 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:38.356 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:38.356 Program python3 found: YES (/usr/bin/python3) 00:02:38.356 Program cat found: YES (/usr/bin/cat) 00:02:38.356 config/meson.build:120: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:38.356 Compiler for C supports arguments -march=native: YES 00:02:38.356 Checking for size of "void *" : 8 00:02:38.356 Checking for size of "void *" : 8 (cached) 00:02:38.356 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:38.356 Library m found: YES 00:02:38.356 Library numa found: YES 00:02:38.356 Has header "numaif.h" : YES 00:02:38.356 Library fdt found: NO 00:02:38.356 Library execinfo found: NO 00:02:38.356 Has header "execinfo.h" : YES 00:02:38.356 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:38.356 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:38.356 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:38.356 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:38.356 Run-time dependency openssl found: YES 3.0.9 00:02:38.356 Run-time dependency libpcap found: YES 1.10.4 00:02:38.356 Has header "pcap.h" with dependency libpcap: YES 00:02:38.356 Compiler for C supports arguments -Wcast-qual: YES 00:02:38.356 Compiler for C supports arguments -Wdeprecated: YES 00:02:38.356 Compiler for C supports arguments -Wformat: YES 00:02:38.356 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:38.356 Compiler for C supports arguments -Wformat-security: NO 00:02:38.356 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:38.356 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:38.356 Compiler for C supports arguments -Wnested-externs: YES 00:02:38.356 Compiler for C supports arguments -Wold-style-definition: YES 00:02:38.356 Compiler for C supports arguments -Wpointer-arith: YES 00:02:38.356 Compiler for C supports arguments -Wsign-compare: YES 00:02:38.356 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:38.356 Compiler for C supports arguments -Wundef: YES 00:02:38.356 Compiler for C supports arguments -Wwrite-strings: YES 00:02:38.356 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:38.356 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:38.356 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:38.356 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:38.356 Program objdump found: YES (/usr/bin/objdump) 00:02:38.356 Compiler for C supports arguments -mavx512f: YES 00:02:38.356 Checking if "AVX512 checking" compiles: YES 00:02:38.356 Fetching value of define "__SSE4_2__" : 1 00:02:38.356 Fetching value of define "__AES__" : 1 00:02:38.356 Fetching value of define "__AVX__" : 1 00:02:38.356 Fetching value of define "__AVX2__" : 1 00:02:38.356 Fetching value of define "__AVX512BW__" : (undefined) 00:02:38.356 Fetching value of define "__AVX512CD__" : (undefined) 00:02:38.356 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:38.356 Fetching value of define "__AVX512F__" : (undefined) 00:02:38.356 Fetching value of define "__AVX512VL__" : (undefined) 00:02:38.356 Fetching value of define "__PCLMUL__" : 1 00:02:38.356 Fetching value of define "__RDRND__" : 1 00:02:38.356 Fetching value of define "__RDSEED__" : 1 00:02:38.356 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:38.356 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:38.356 Message: lib/log: Defining dependency "log" 00:02:38.356 Message: lib/kvargs: Defining dependency "kvargs" 00:02:38.356 Message: lib/argparse: Defining dependency "argparse" 00:02:38.356 Message: lib/telemetry: Defining dependency "telemetry" 00:02:38.356 Checking for function "getentropy" : NO 
00:02:38.356 Message: lib/eal: Defining dependency "eal" 00:02:38.356 Message: lib/ring: Defining dependency "ring" 00:02:38.356 Message: lib/rcu: Defining dependency "rcu" 00:02:38.356 Message: lib/mempool: Defining dependency "mempool" 00:02:38.356 Message: lib/mbuf: Defining dependency "mbuf" 00:02:38.356 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:38.356 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.356 Compiler for C supports arguments -mpclmul: YES 00:02:38.356 Compiler for C supports arguments -maes: YES 00:02:38.356 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:38.356 Compiler for C supports arguments -mavx512bw: YES 00:02:38.356 Compiler for C supports arguments -mavx512dq: YES 00:02:38.356 Compiler for C supports arguments -mavx512vl: YES 00:02:38.356 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:38.356 Compiler for C supports arguments -mavx2: YES 00:02:38.356 Compiler for C supports arguments -mavx: YES 00:02:38.356 Message: lib/net: Defining dependency "net" 00:02:38.356 Message: lib/meter: Defining dependency "meter" 00:02:38.356 Message: lib/ethdev: Defining dependency "ethdev" 00:02:38.356 Message: lib/pci: Defining dependency "pci" 00:02:38.356 Message: lib/cmdline: Defining dependency "cmdline" 00:02:38.356 Message: lib/metrics: Defining dependency "metrics" 00:02:38.356 Message: lib/hash: Defining dependency "hash" 00:02:38.356 Message: lib/timer: Defining dependency "timer" 00:02:38.356 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.356 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:38.356 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:38.356 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:38.356 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:38.356 Message: lib/acl: Defining dependency "acl" 00:02:38.356 Message: lib/bbdev: Defining dependency "bbdev" 00:02:38.356 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:38.356 Run-time dependency libelf found: YES 0.190 00:02:38.356 Message: lib/bpf: Defining dependency "bpf" 00:02:38.356 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:38.356 Message: lib/compressdev: Defining dependency "compressdev" 00:02:38.356 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:38.356 Message: lib/distributor: Defining dependency "distributor" 00:02:38.356 Message: lib/dmadev: Defining dependency "dmadev" 00:02:38.356 Message: lib/efd: Defining dependency "efd" 00:02:38.356 Message: lib/eventdev: Defining dependency "eventdev" 00:02:38.356 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:38.356 Message: lib/gpudev: Defining dependency "gpudev" 00:02:38.356 Message: lib/gro: Defining dependency "gro" 00:02:38.356 Message: lib/gso: Defining dependency "gso" 00:02:38.356 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:38.356 Message: lib/jobstats: Defining dependency "jobstats" 00:02:38.356 Message: lib/latencystats: Defining dependency "latencystats" 00:02:38.356 Message: lib/lpm: Defining dependency "lpm" 00:02:38.356 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.356 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:38.356 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:38.356 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:38.356 Message: lib/member: Defining dependency "member" 00:02:38.356 
Message: lib/pcapng: Defining dependency "pcapng" 00:02:38.356 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:38.356 Message: lib/power: Defining dependency "power" 00:02:38.356 Message: lib/rawdev: Defining dependency "rawdev" 00:02:38.356 Message: lib/regexdev: Defining dependency "regexdev" 00:02:38.356 Message: lib/mldev: Defining dependency "mldev" 00:02:38.356 Message: lib/rib: Defining dependency "rib" 00:02:38.356 Message: lib/reorder: Defining dependency "reorder" 00:02:38.356 Message: lib/sched: Defining dependency "sched" 00:02:38.356 Message: lib/security: Defining dependency "security" 00:02:38.356 Message: lib/stack: Defining dependency "stack" 00:02:38.357 Has header "linux/userfaultfd.h" : YES 00:02:38.357 Has header "linux/vduse.h" : YES 00:02:38.357 Message: lib/vhost: Defining dependency "vhost" 00:02:38.357 Message: lib/ipsec: Defining dependency "ipsec" 00:02:38.357 Message: lib/pdcp: Defining dependency "pdcp" 00:02:38.357 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:38.357 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:38.357 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:38.357 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:38.357 Message: lib/fib: Defining dependency "fib" 00:02:38.357 Message: lib/port: Defining dependency "port" 00:02:38.357 Message: lib/pdump: Defining dependency "pdump" 00:02:38.357 Message: lib/table: Defining dependency "table" 00:02:38.357 Message: lib/pipeline: Defining dependency "pipeline" 00:02:38.357 Message: lib/graph: Defining dependency "graph" 00:02:38.357 Message: lib/node: Defining dependency "node" 00:02:38.357 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:38.357 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:38.357 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.834 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.834 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:39.834 Compiler for C supports arguments -Wno-unused-value: YES 00:02:39.834 Compiler for C supports arguments -Wno-format: YES 00:02:39.834 Compiler for C supports arguments -Wno-format-security: YES 00:02:39.834 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.834 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.834 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.834 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.834 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.834 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.834 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.834 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.834 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.834 Has header "sys/epoll.h" : YES 00:02:39.834 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.834 Configuring doxy-api-html.conf using configuration 00:02:39.834 Configuring doxy-api-man.conf using configuration 00:02:39.834 Program mandb found: YES (/usr/bin/mandb) 00:02:39.834 Program sphinx-build found: NO 00:02:39.834 Configuring rte_build_config.h using configuration 00:02:39.834 Message: 00:02:39.834 ================= 00:02:39.834 Applications Enabled 00:02:39.834 ================= 00:02:39.834 00:02:39.834 apps: 00:02:39.834 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, 
test-compress-perf, 00:02:39.834 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:39.834 test-pmd, test-regex, test-sad, test-security-perf, 00:02:39.834 00:02:39.834 Message: 00:02:39.834 ================= 00:02:39.834 Libraries Enabled 00:02:39.834 ================= 00:02:39.834 00:02:39.834 libs: 00:02:39.834 log, kvargs, argparse, telemetry, eal, ring, rcu, mempool, 00:02:39.834 mbuf, net, meter, ethdev, pci, cmdline, metrics, hash, 00:02:39.834 timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, 00:02:39.834 distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, 00:02:39.834 ip_frag, jobstats, latencystats, lpm, member, pcapng, power, rawdev, 00:02:39.834 regexdev, mldev, rib, reorder, sched, security, stack, vhost, 00:02:39.834 ipsec, pdcp, fib, port, pdump, table, pipeline, graph, 00:02:39.834 node, 00:02:39.834 00:02:39.834 Message: 00:02:39.834 =============== 00:02:39.834 Drivers Enabled 00:02:39.834 =============== 00:02:39.834 00:02:39.834 common: 00:02:39.834 00:02:39.834 bus: 00:02:39.834 pci, vdev, 00:02:39.834 mempool: 00:02:39.834 ring, 00:02:39.834 dma: 00:02:39.834 00:02:39.834 net: 00:02:39.834 i40e, 00:02:39.834 raw: 00:02:39.834 00:02:39.834 crypto: 00:02:39.834 00:02:39.834 compress: 00:02:39.834 00:02:39.834 regex: 00:02:39.834 00:02:39.834 ml: 00:02:39.834 00:02:39.834 vdpa: 00:02:39.834 00:02:39.834 event: 00:02:39.834 00:02:39.834 baseband: 00:02:39.834 00:02:39.834 gpu: 00:02:39.834 00:02:39.834 00:02:39.834 Message: 00:02:39.834 ================= 00:02:39.834 Content Skipped 00:02:39.834 ================= 00:02:39.834 00:02:39.834 apps: 00:02:39.834 00:02:39.834 libs: 00:02:39.834 00:02:39.834 drivers: 00:02:39.834 common/cpt: not in enabled drivers build config 00:02:39.834 common/dpaax: not in enabled drivers build config 00:02:39.834 common/iavf: not in enabled drivers build config 00:02:39.834 common/idpf: not in enabled drivers build config 00:02:39.834 common/ionic: not in enabled drivers build config 00:02:39.834 common/mvep: not in enabled drivers build config 00:02:39.834 common/octeontx: not in enabled drivers build config 00:02:39.834 bus/auxiliary: not in enabled drivers build config 00:02:39.834 bus/cdx: not in enabled drivers build config 00:02:39.834 bus/dpaa: not in enabled drivers build config 00:02:39.834 bus/fslmc: not in enabled drivers build config 00:02:39.835 bus/ifpga: not in enabled drivers build config 00:02:39.835 bus/platform: not in enabled drivers build config 00:02:39.835 bus/uacce: not in enabled drivers build config 00:02:39.835 bus/vmbus: not in enabled drivers build config 00:02:39.835 common/cnxk: not in enabled drivers build config 00:02:39.835 common/mlx5: not in enabled drivers build config 00:02:39.835 common/nfp: not in enabled drivers build config 00:02:39.835 common/nitrox: not in enabled drivers build config 00:02:39.835 common/qat: not in enabled drivers build config 00:02:39.835 common/sfc_efx: not in enabled drivers build config 00:02:39.835 mempool/bucket: not in enabled drivers build config 00:02:39.835 mempool/cnxk: not in enabled drivers build config 00:02:39.835 mempool/dpaa: not in enabled drivers build config 00:02:39.835 mempool/dpaa2: not in enabled drivers build config 00:02:39.835 mempool/octeontx: not in enabled drivers build config 00:02:39.835 mempool/stack: not in enabled drivers build config 00:02:39.835 dma/cnxk: not in enabled drivers build config 00:02:39.835 dma/dpaa: not in enabled drivers build 
config 00:02:39.835 dma/dpaa2: not in enabled drivers build config 00:02:39.835 dma/hisilicon: not in enabled drivers build config 00:02:39.835 dma/idxd: not in enabled drivers build config 00:02:39.835 dma/ioat: not in enabled drivers build config 00:02:39.835 dma/skeleton: not in enabled drivers build config 00:02:39.835 net/af_packet: not in enabled drivers build config 00:02:39.835 net/af_xdp: not in enabled drivers build config 00:02:39.835 net/ark: not in enabled drivers build config 00:02:39.835 net/atlantic: not in enabled drivers build config 00:02:39.835 net/avp: not in enabled drivers build config 00:02:39.835 net/axgbe: not in enabled drivers build config 00:02:39.835 net/bnx2x: not in enabled drivers build config 00:02:39.835 net/bnxt: not in enabled drivers build config 00:02:39.835 net/bonding: not in enabled drivers build config 00:02:39.835 net/cnxk: not in enabled drivers build config 00:02:39.835 net/cpfl: not in enabled drivers build config 00:02:39.835 net/cxgbe: not in enabled drivers build config 00:02:39.835 net/dpaa: not in enabled drivers build config 00:02:39.835 net/dpaa2: not in enabled drivers build config 00:02:39.835 net/e1000: not in enabled drivers build config 00:02:39.835 net/ena: not in enabled drivers build config 00:02:39.835 net/enetc: not in enabled drivers build config 00:02:39.835 net/enetfec: not in enabled drivers build config 00:02:39.835 net/enic: not in enabled drivers build config 00:02:39.835 net/failsafe: not in enabled drivers build config 00:02:39.835 net/fm10k: not in enabled drivers build config 00:02:39.835 net/gve: not in enabled drivers build config 00:02:39.835 net/hinic: not in enabled drivers build config 00:02:39.835 net/hns3: not in enabled drivers build config 00:02:39.835 net/iavf: not in enabled drivers build config 00:02:39.835 net/ice: not in enabled drivers build config 00:02:39.835 net/idpf: not in enabled drivers build config 00:02:39.835 net/igc: not in enabled drivers build config 00:02:39.835 net/ionic: not in enabled drivers build config 00:02:39.835 net/ipn3ke: not in enabled drivers build config 00:02:39.835 net/ixgbe: not in enabled drivers build config 00:02:39.835 net/mana: not in enabled drivers build config 00:02:39.835 net/memif: not in enabled drivers build config 00:02:39.835 net/mlx4: not in enabled drivers build config 00:02:39.835 net/mlx5: not in enabled drivers build config 00:02:39.835 net/mvneta: not in enabled drivers build config 00:02:39.835 net/mvpp2: not in enabled drivers build config 00:02:39.835 net/netvsc: not in enabled drivers build config 00:02:39.835 net/nfb: not in enabled drivers build config 00:02:39.835 net/nfp: not in enabled drivers build config 00:02:39.835 net/ngbe: not in enabled drivers build config 00:02:39.835 net/null: not in enabled drivers build config 00:02:39.835 net/octeontx: not in enabled drivers build config 00:02:39.835 net/octeon_ep: not in enabled drivers build config 00:02:39.835 net/pcap: not in enabled drivers build config 00:02:39.835 net/pfe: not in enabled drivers build config 00:02:39.835 net/qede: not in enabled drivers build config 00:02:39.835 net/ring: not in enabled drivers build config 00:02:39.835 net/sfc: not in enabled drivers build config 00:02:39.835 net/softnic: not in enabled drivers build config 00:02:39.835 net/tap: not in enabled drivers build config 00:02:39.835 net/thunderx: not in enabled drivers build config 00:02:39.835 net/txgbe: not in enabled drivers build config 00:02:39.835 net/vdev_netvsc: not in enabled drivers build config 
00:02:39.835 net/vhost: not in enabled drivers build config
00:02:39.835 net/virtio: not in enabled drivers build config
00:02:39.835 net/vmxnet3: not in enabled drivers build config
00:02:39.835 raw/cnxk_bphy: not in enabled drivers build config
00:02:39.835 raw/cnxk_gpio: not in enabled drivers build config
00:02:39.835 raw/dpaa2_cmdif: not in enabled drivers build config
00:02:39.835 raw/ifpga: not in enabled drivers build config
00:02:39.835 raw/ntb: not in enabled drivers build config
00:02:39.835 raw/skeleton: not in enabled drivers build config
00:02:39.835 crypto/armv8: not in enabled drivers build config
00:02:39.835 crypto/bcmfs: not in enabled drivers build config
00:02:39.835 crypto/caam_jr: not in enabled drivers build config
00:02:39.835 crypto/ccp: not in enabled drivers build config
00:02:39.835 crypto/cnxk: not in enabled drivers build config
00:02:39.835 crypto/dpaa_sec: not in enabled drivers build config
00:02:39.835 crypto/dpaa2_sec: not in enabled drivers build config
00:02:39.835 crypto/ipsec_mb: not in enabled drivers build config
00:02:39.835 crypto/mlx5: not in enabled drivers build config
00:02:39.835 crypto/mvsam: not in enabled drivers build config
00:02:39.835 crypto/nitrox: not in enabled drivers build config
00:02:39.835 crypto/null: not in enabled drivers build config
00:02:39.835 crypto/octeontx: not in enabled drivers build config
00:02:39.835 crypto/openssl: not in enabled drivers build config
00:02:39.835 crypto/scheduler: not in enabled drivers build config
00:02:39.835 crypto/uadk: not in enabled drivers build config
00:02:39.835 crypto/virtio: not in enabled drivers build config
00:02:39.835 compress/isal: not in enabled drivers build config
00:02:39.835 compress/mlx5: not in enabled drivers build config
00:02:39.835 compress/nitrox: not in enabled drivers build config
00:02:39.835 compress/octeontx: not in enabled drivers build config
00:02:39.835 compress/zlib: not in enabled drivers build config
00:02:39.835 regex/mlx5: not in enabled drivers build config
00:02:39.835 regex/cn9k: not in enabled drivers build config
00:02:39.835 ml/cnxk: not in enabled drivers build config
00:02:39.835 vdpa/ifc: not in enabled drivers build config
00:02:39.835 vdpa/mlx5: not in enabled drivers build config
00:02:39.835 vdpa/nfp: not in enabled drivers build config
00:02:39.835 vdpa/sfc: not in enabled drivers build config
00:02:39.835 event/cnxk: not in enabled drivers build config
00:02:39.835 event/dlb2: not in enabled drivers build config
00:02:39.835 event/dpaa: not in enabled drivers build config
00:02:39.835 event/dpaa2: not in enabled drivers build config
00:02:39.835 event/dsw: not in enabled drivers build config
00:02:39.835 event/opdl: not in enabled drivers build config
00:02:39.835 event/skeleton: not in enabled drivers build config
00:02:39.835 event/sw: not in enabled drivers build config
00:02:39.835 event/octeontx: not in enabled drivers build config
00:02:39.835 baseband/acc: not in enabled drivers build config
00:02:39.835 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:39.835 baseband/fpga_lte_fec: not in enabled drivers build config
00:02:39.835 baseband/la12xx: not in enabled drivers build config
00:02:39.835 baseband/null: not in enabled drivers build config
00:02:39.835 baseband/turbo_sw: not in enabled drivers build config
00:02:39.835 gpu/cuda: not in enabled drivers build config
00:02:39.835
00:02:39.835
00:02:39.835 Build targets in project: 224
00:02:39.835
00:02:39.835 DPDK 24.07.0-rc0
00:02:39.835
00:02:39.835 User defined options
00:02:39.835 libdir : lib
00:02:39.835 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:39.835 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:39.835 c_link_args :
00:02:39.835 enable_docs : false
00:02:39.835 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:39.835 enable_kmods : false
00:02:39.835 machine : native
00:02:39.835 tests : false
00:02:39.835
00:02:39.835 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.835 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:02:39.835 02:47:18 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:02:39.835 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:40.094 [1/722] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:40.094 [2/722] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:40.094 [3/722] Linking static target lib/librte_kvargs.a
00:02:40.094 [4/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:40.094 [5/722] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:40.094 [6/722] Linking static target lib/librte_log.a
00:02:40.352 [7/722] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:02:40.352 [8/722] Linking static target lib/librte_argparse.a
00:02:40.352 [9/722] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.611 [10/722] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.611 [11/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:40.611 [12/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:40.611 [13/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:40.611 [14/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:40.611 [15/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:40.611 [16/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:40.611 [17/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:40.611 [18/722] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:40.611 [19/722] Linking target lib/librte_log.so.24.2
00:02:40.870 [20/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:41.128 [21/722] Generating symbol file lib/librte_log.so.24.2.p/librte_log.so.24.2.symbols
00:02:41.128 [22/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:41.128 [23/722] Linking target lib/librte_kvargs.so.24.2
00:02:41.128 [24/722] Linking target lib/librte_argparse.so.24.2
00:02:41.128 [25/722] Generating symbol file lib/librte_kvargs.so.24.2.p/librte_kvargs.so.24.2.symbols
00:02:41.128 [26/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:41.128 [27/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:41.128 [28/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:41.387 [29/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:41.387 [30/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:41.387 [31/722] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:41.387 [32/722] Linking static target lib/librte_telemetry.a
00:02:41.387 [33/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:41.387 [34/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:41.387 [35/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:41.645 [36/722] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.645 [37/722] Linking target lib/librte_telemetry.so.24.2
00:02:41.905 [38/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:41.905 [39/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:41.905 [40/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:41.905 [41/722] Generating symbol file lib/librte_telemetry.so.24.2.p/librte_telemetry.so.24.2.symbols
00:02:41.905 [42/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:41.905 [43/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:41.905 [44/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:41.905 [45/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:41.905 [46/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:41.905 [47/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:41.905 [48/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:42.164 [49/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:42.422 [50/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:42.422 [51/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:42.422 [52/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:42.681 [53/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:42.681 [54/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:42.681 [55/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:42.681 [56/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:42.681 [57/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:42.941 [58/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:42.941 [59/722] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:42.941 [60/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:43.200 [61/722] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:43.200 [62/722] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:43.200 [63/722] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:43.200 [64/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:43.200 [65/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:43.200 [66/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:43.200 [67/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:43.200 [68/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:43.458 [69/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:43.458 [70/722] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:43.717 [71/722] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:43.976 [72/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:43.976 [73/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:43.976 [74/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:43.976 [75/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:43.976 [76/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:43.976 [77/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:43.976 [78/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:43.976 [79/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:43.976 [80/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:44.237 [81/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:44.237 [82/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:44.237 [83/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:44.496 [84/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:44.496 [85/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:44.496 [86/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:44.754 [87/722] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:44.754 [88/722] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:44.754 [89/722] Linking static target lib/librte_ring.a
00:02:45.013 [90/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:45.013 [91/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:45.013 [92/722] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.013 [93/722] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:45.013 [94/722] Linking static target lib/librte_eal.a
00:02:45.271 [95/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:45.271 [96/722] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:45.271 [97/722] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:45.271 [98/722] Linking static target lib/librte_mempool.a
00:02:45.271 [99/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:45.530 [100/722] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:45.530 [101/722] Linking static target lib/librte_rcu.a
00:02:45.530 [102/722] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:45.530 [103/722] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:45.789 [104/722] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:45.789 [105/722] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:45.789 [106/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:45.789 [107/722] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:45.789 [108/722] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:46.048 [109/722] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.048 [110/722] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:46.048 [111/722] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:46.048 [112/722] Linking static target lib/librte_mbuf.a
00:02:46.306 [113/722] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:46.306 [114/722] Linking static target lib/librte_net.a
00:02:46.306 [115/722] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:46.306 [116/722] Linking static target lib/librte_meter.a
00:02:46.564 [117/722] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.564 [118/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:46.564 [119/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:46.564 [120/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:46.564 [121/722] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.564 [122/722] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:46.823 [123/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:47.389 [124/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:47.389 [125/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:47.647 [126/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:47.647 [127/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:47.647 [128/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:47.905 [129/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:47.906 [130/722] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:47.906 [131/722] Linking static target lib/librte_pci.a
00:02:47.906 [132/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:47.906 [133/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:47.906 [134/722] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.164 [135/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:48.164 [136/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:48.164 [137/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:48.164 [138/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:48.164 [139/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:48.164 [140/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:48.164 [141/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:48.422 [142/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:48.422 [143/722] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:48.422 [144/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:48.422 [145/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:48.422 [146/722] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:48.422 [147/722] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:48.422 [148/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:48.680 [149/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:48.680 [150/722] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:48.680 [151/722] Linking static target lib/librte_cmdline.a
00:02:48.938 [152/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:48.938 [153/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:48.938 [154/722] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:48.938 [155/722] Linking static target lib/librte_metrics.a
00:02:49.196 [156/722] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:49.196 [157/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:49.454 [158/722] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.454 [159/722] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:49.454 [160/722] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:50.019 [161/722] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:50.019 [162/722] Linking static target lib/librte_timer.a
00:02:50.276 [163/722] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:50.276 [164/722] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:50.276 [165/722] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:50.534 [166/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:50.792 [167/722] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:51.056 [168/722] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:51.057 [169/722] Linking static target lib/librte_bitratestats.a
00:02:51.057 [170/722] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:51.057 [171/722] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:51.320 [172/722] Linking static target lib/librte_ethdev.a
00:02:51.320 [173/722] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:51.320 [174/722] Linking static target lib/librte_bbdev.a
00:02:51.320 [175/722] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.320 [176/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:51.320 [177/722] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:51.579 [178/722] Linking static target lib/librte_hash.a
00:02:51.579 [179/722] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:51.579 [180/722] Linking target lib/librte_eal.so.24.2
00:02:51.579 [181/722] Generating symbol file lib/librte_eal.so.24.2.p/librte_eal.so.24.2.symbols
00:02:51.837 [182/722] Linking target lib/librte_ring.so.24.2
00:02:51.837 [183/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:51.837 [184/722] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:02:51.837 [185/722] Linking target lib/librte_meter.so.24.2
00:02:51.837 [186/722] Linking target lib/librte_pci.so.24.2
00:02:51.837 [187/722] Generating symbol file lib/librte_ring.so.24.2.p/librte_ring.so.24.2.symbols
00:02:51.837 [188/722] Linking target lib/librte_rcu.so.24.2
00:02:51.837 [189/722] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.095 [190/722] Linking target lib/librte_mempool.so.24.2
00:02:52.095 [191/722] Generating symbol file lib/librte_meter.so.24.2.p/librte_meter.so.24.2.symbols
00:02:52.095 [192/722] Generating symbol file lib/librte_pci.so.24.2.p/librte_pci.so.24.2.symbols
00:02:52.095 [193/722] Linking target lib/librte_timer.so.24.2
00:02:52.095 [194/722] Linking static target lib/acl/libavx2_tmp.a
00:02:52.095 [195/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:52.095 [196/722] Generating symbol file lib/librte_rcu.so.24.2.p/librte_rcu.so.24.2.symbols
00:02:52.095 [197/722] Generating symbol file lib/librte_mempool.so.24.2.p/librte_mempool.so.24.2.symbols
00:02:52.095 [198/722] Generating symbol file lib/librte_timer.so.24.2.p/librte_timer.so.24.2.symbols
00:02:52.095 [199/722] Linking target lib/librte_mbuf.so.24.2
00:02:52.095 [200/722] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.095 [201/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:52.353 [202/722] Generating symbol file lib/librte_mbuf.so.24.2.p/librte_mbuf.so.24.2.symbols
00:02:52.353 [203/722] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:52.353 [204/722] Linking static target lib/acl/libavx512_tmp.a
00:02:52.353 [205/722] Linking target lib/librte_net.so.24.2
00:02:52.353 [206/722] Linking target lib/librte_bbdev.so.24.2
00:02:52.353 [207/722] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:52.353 [208/722] Linking static target lib/librte_acl.a
00:02:52.353 [209/722] Generating symbol file lib/librte_net.so.24.2.p/librte_net.so.24.2.symbols
00:02:52.611 [210/722] Linking target lib/librte_cmdline.so.24.2
00:02:52.611 [211/722] Linking target lib/librte_hash.so.24.2
00:02:52.611 [212/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:52.611 [213/722] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:52.611 [214/722] Linking static target lib/librte_cfgfile.a
00:02:52.611 [215/722] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.611 [216/722] Generating symbol file lib/librte_hash.so.24.2.p/librte_hash.so.24.2.symbols
00:02:52.868 [217/722] Linking target lib/librte_acl.so.24.2
00:02:52.868 [218/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:52.868 [219/722] Generating symbol file lib/librte_acl.so.24.2.p/librte_acl.so.24.2.symbols
00:02:53.127 [220/722] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.127 [221/722] Linking target lib/librte_cfgfile.so.24.2
00:02:53.127 [222/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
00:02:53.127 [223/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:53.385 [224/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:53.385 [225/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:53.385 [226/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:53.643 [227/722] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:53.643 [228/722] Linking static target lib/librte_bpf.a
00:02:53.643 [229/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:53.643 [230/722] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:53.643 [231/722] Linking static target lib/librte_compressdev.a
00:02:53.901 [232/722] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.901 [233/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:53.901 [234/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:54.160 [235/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:54.160 [236/722] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:54.160 [237/722] Linking static target lib/librte_distributor.a
00:02:54.160 [238/722] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.160 [239/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:54.160 [240/722] Linking target lib/librte_compressdev.so.24.2
00:02:54.417 [241/722] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.417 [242/722] Linking target lib/librte_distributor.so.24.2
00:02:54.417 [243/722] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:54.417 [244/722] Linking static target lib/librte_dmadev.a
00:02:54.417 [245/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:54.982 [246/722] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:54.982 [247/722] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.982 [248/722] Linking target lib/librte_dmadev.so.24.2
00:02:55.239 [249/722] Generating symbol file lib/librte_dmadev.so.24.2.p/librte_dmadev.so.24.2.symbols
00:02:55.239 [250/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:55.497 [251/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:02:55.497 [252/722] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:55.497 [253/722] Linking static target lib/librte_efd.a
00:02:55.754 [254/722] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:55.754 [255/722] Linking static target lib/librte_cryptodev.a
00:02:55.754 [256/722] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.754 [257/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:55.754 [258/722] Linking target lib/librte_efd.so.24.2
00:02:56.012 [259/722] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:02:56.012 [260/722] Linking static target lib/librte_dispatcher.a
00:02:56.012 [261/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:56.270 [262/722] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:56.270 [263/722] Linking static target lib/librte_gpudev.a
00:02:56.528 [264/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:56.528 [265/722] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.528 [266/722] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.528 [267/722] Linking target lib/librte_ethdev.so.24.2
00:02:56.528 [268/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:56.528 [269/722] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:56.787 [270/722] Generating symbol file lib/librte_ethdev.so.24.2.p/librte_ethdev.so.24.2.symbols
00:02:56.787 [271/722] Linking target lib/librte_metrics.so.24.2
00:02:56.787 [272/722] Generating symbol file lib/librte_metrics.so.24.2.p/librte_metrics.so.24.2.symbols
00:02:57.045 [273/722] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:02:57.045 [274/722] Linking target lib/librte_bitratestats.so.24.2
00:02:57.045 [275/722] Linking target lib/librte_bpf.so.24.2
00:02:57.045 [276/722] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.045 [277/722] Linking target lib/librte_cryptodev.so.24.2
00:02:57.045 [278/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:57.045 [279/722] Generating symbol file lib/librte_bpf.so.24.2.p/librte_bpf.so.24.2.symbols
00:02:57.045 [280/722] Generating symbol file lib/librte_cryptodev.so.24.2.p/librte_cryptodev.so.24.2.symbols
00:02:57.304 [281/722] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.304 [282/722] Linking target lib/librte_gpudev.so.24.2
00:02:57.304 [283/722] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:57.304 [284/722] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:57.304 [285/722] Linking static target lib/librte_eventdev.a
00:02:57.304 [286/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:57.562 [287/722] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:57.562 [288/722] Linking static target lib/librte_gro.a
00:02:57.562 [289/722] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:57.562 [290/722] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:57.820 [291/722] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:57.820 [292/722] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:57.820 [293/722] Linking target lib/librte_gro.so.24.2
00:02:57.820 [294/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:57.820 [295/722] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:57.820 [296/722] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:57.820 [297/722] Linking static target lib/librte_gso.a
00:02:58.078 [298/722] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.078 [299/722] Linking target lib/librte_gso.so.24.2
00:02:58.336 [300/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:58.336 [301/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:58.336 [302/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:58.336 [303/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:58.336 [304/722] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:58.336 [305/722] Linking static target lib/librte_jobstats.a
00:02:58.620 [306/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:58.620 [307/722] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:58.620 [308/722] Linking static target lib/librte_latencystats.a
00:02:58.620 [309/722] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:58.620 [310/722] Linking static target lib/librte_ip_frag.a
00:02:58.620 [311/722] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.881 [312/722] Linking target lib/librte_jobstats.so.24.2
00:02:58.881 [313/722] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.881 [314/722] Linking target lib/librte_latencystats.so.24.2
00:02:58.881 [315/722] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:58.881 [316/722] Linking target lib/librte_ip_frag.so.24.2
00:02:58.881 [317/722] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:58.881 [318/722] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:58.881 [319/722] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:59.140 [320/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:59.140 [321/722] Generating symbol file lib/librte_ip_frag.so.24.2.p/librte_ip_frag.so.24.2.symbols
00:02:59.140 [322/722] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:59.140 [323/722] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:59.398 [324/722] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:59.656 [325/722] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.656 [326/722] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:59.656 [327/722] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:59.656 [328/722] Linking target lib/librte_eventdev.so.24.2
00:02:59.656 [329/722] Linking static target lib/librte_lpm.a
00:02:59.656 [330/722] Generating symbol file lib/librte_eventdev.so.24.2.p/librte_eventdev.so.24.2.symbols
00:02:59.656 [331/722] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:59.915 [332/722] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:59.915 [333/722] Linking target lib/librte_dispatcher.so.24.2
00:02:59.915 [334/722] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:59.915 [335/722] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:59.915 [336/722] Linking static target lib/librte_pcapng.a
00:02:59.915 [337/722] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:59.915 [338/722] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:59.915 [339/722] Linking target lib/librte_lpm.so.24.2
00:03:00.173 [340/722] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:03:00.173 [341/722] Generating symbol file lib/librte_lpm.so.24.2.p/librte_lpm.so.24.2.symbols
00:03:00.173 [342/722] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:00.173 [343/722] Linking target lib/librte_pcapng.so.24.2
00:03:00.431 [344/722] Generating symbol file lib/librte_pcapng.so.24.2.p/librte_pcapng.so.24.2.symbols
00:03:00.431 [345/722] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:03:00.431 [346/722] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:00.431 [347/722] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:03:00.689 [348/722] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:00.689 [349/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:03:00.689 [350/722] Linking static target lib/librte_power.a
00:03:00.689 [351/722] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:00.689 [352/722] Linking static target lib/librte_regexdev.a
00:03:00.689 [353/722] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:00.689 [354/722] Linking static target lib/librte_rawdev.a
00:03:00.947 [355/722] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:00.947 [356/722] Linking static target lib/librte_member.a
00:03:00.947 [357/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:03:00.947 [358/722] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:03:00.947 [359/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:03:01.206 [360/722] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.206 [361/722] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.206 [362/722] Linking target lib/librte_member.so.24.2
00:03:01.206 [363/722] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:03:01.206 [364/722] Linking static target lib/librte_mldev.a
00:03:01.206 [365/722] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.206 [366/722] Linking target lib/librte_rawdev.so.24.2
00:03:01.464 [367/722] Linking target lib/librte_power.so.24.2
00:03:01.464 [368/722] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:01.464 [369/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:01.464 [370/722] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:01.464 [371/722] Linking target lib/librte_regexdev.so.24.2
00:03:01.722 [372/722] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:01.722 [373/722] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:01.722 [374/722] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:01.722 [375/722] Linking static target lib/librte_reorder.a
00:03:01.980 [376/722] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:01.980 [377/722] Linking static target lib/librte_rib.a
00:03:01.980 [378/722] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:01.980 [379/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:01.980 [380/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:02.238 [381/722] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:02.238 [382/722] Linking static target lib/librte_stack.a
00:03:02.238 [383/722] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.238 [384/722] Linking target lib/librte_reorder.so.24.2
00:03:02.238 [385/722] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:02.238 [386/722] Linking static target lib/librte_security.a
00:03:02.238 [387/722] Generating symbol file lib/librte_reorder.so.24.2.p/librte_reorder.so.24.2.symbols
00:03:02.238 [388/722] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.238 [389/722] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.497 [390/722] Linking target lib/librte_stack.so.24.2
00:03:02.497 [391/722] Linking target lib/librte_rib.so.24.2
00:03:02.497 [392/722] Generating symbol file lib/librte_rib.so.24.2.p/librte_rib.so.24.2.symbols
00:03:02.754 [393/722] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:02.754 [394/722] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.754 [395/722] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.754 [396/722] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:02.754 [397/722] Linking target lib/librte_security.so.24.2
00:03:02.754 [398/722] Linking target lib/librte_mldev.so.24.2
00:03:02.754 [399/722] Generating symbol file lib/librte_security.so.24.2.p/librte_security.so.24.2.symbols
00:03:03.012 [400/722] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:03.012 [401/722] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:03.012 [402/722] Linking static target lib/librte_sched.a
00:03:03.269 [403/722] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.527 [404/722] Linking target lib/librte_sched.so.24.2
00:03:03.527 [405/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:03.527 [406/722] Generating symbol file lib/librte_sched.so.24.2.p/librte_sched.so.24.2.symbols
00:03:03.527 [407/722] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:03.785 [408/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:03.785 [409/722] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:04.043 [410/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:04.043 [411/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:04.300 [412/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:04.300 [413/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:03:04.559 [414/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:03:04.559 [415/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:04.559 [416/722] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:04.817 [417/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:03:04.817 [418/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:03:04.817 [419/722] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:04.817 [420/722] Linking static target lib/librte_ipsec.a
00:03:05.075 [421/722] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:03:05.075 [422/722] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:03:05.334 [423/722] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.334 [424/722] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:03:05.334 [425/722] Linking target lib/librte_ipsec.so.24.2
00:03:05.334 [426/722] Compiling C object lib/librte_port.a.p/port_port_log.c.o
00:03:05.334 [427/722] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:03:05.334 [428/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:05.334 [429/722] Linking static target lib/fib/libtrie_avx512_tmp.a
00:03:05.334 [430/722] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:05.334 [431/722] Generating symbol file lib/librte_ipsec.so.24.2.p/librte_ipsec.so.24.2.symbols
00:03:06.268 [432/722] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:06.268 [433/722] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:06.268 [434/722] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:03:06.268 [435/722] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:06.268 [436/722] Linking static target lib/librte_pdcp.a
00:03:06.268 [437/722] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:06.268 [438/722] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:06.268 [439/722] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:06.268 [440/722] Linking static target lib/librte_fib.a
00:03:06.527 [441/722] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.785 [442/722] Linking target lib/librte_pdcp.so.24.2
00:03:06.785 [443/722] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.785 [444/722] Linking target lib/librte_fib.so.24.2
00:03:06.785 [445/722] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:07.351 [446/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:07.351 [447/722] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:07.351 [448/722] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:07.351 [449/722] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:07.609 [450/722] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:07.609 [451/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:07.609 [452/722] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:07.867 [453/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:08.126 [454/722] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:08.126 [455/722] Linking static target lib/librte_port.a
00:03:08.126 [456/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:08.126 [457/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:08.384 [458/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:08.384 [459/722] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:08.642 [460/722] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:08.642 [461/722] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.642 [462/722] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:08.642 [463/722] Linking target lib/librte_port.so.24.2
00:03:08.642 [464/722] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:08.642 [465/722] Linking static target lib/librte_pdump.a
00:03:08.642 [466/722] Generating symbol file lib/librte_port.so.24.2.p/librte_port.so.24.2.symbols
00:03:08.900 [467/722] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.900 [468/722] Linking target lib/librte_pdump.so.24.2
00:03:08.900 [469/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:09.158 [470/722] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:09.158 [471/722] Compiling C object lib/librte_table.a.p/table_table_log.c.o
00:03:09.416 [472/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:09.416 [473/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:09.416 [474/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:09.416 [475/722] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:09.673 [476/722] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:09.673 [477/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:09.931 [478/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:09.931 [479/722] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:09.931 [480/722] Linking static target lib/librte_table.a
00:03:10.189 [481/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:10.189 [482/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:10.755 [483/722] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:10.755 [484/722] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.755 [485/722] Linking target lib/librte_table.so.24.2
00:03:11.013 [486/722] Generating symbol file lib/librte_table.so.24.2.p/librte_table.so.24.2.symbols
00:03:11.013 [487/722] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:11.013 [488/722] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:11.013 [489/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:11.270 [490/722] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:11.527 [491/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:11.527 [492/722] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:11.785 [493/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:11.785 [494/722] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:11.785 [495/722] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:12.042 [496/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:12.042 [497/722] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:12.300 [498/722] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:12.300 [499/722] Linking static target lib/librte_graph.a
00:03:12.558 [500/722] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:12.558 [501/722] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:12.558 [502/722] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:12.816 [503/722] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:13.074 [504/722] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:13.074 [505/722] Linking target lib/librte_graph.so.24.2
00:03:13.074 [506/722] Generating symbol file lib/librte_graph.so.24.2.p/librte_graph.so.24.2.symbols
00:03:13.074 [507/722] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:13.074 [508/722] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:13.639 [509/722] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:13.639 [510/722] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:13.639 [511/722] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:13.639 [512/722] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:13.639 [513/722] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:13.896 [514/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:13.896 [515/722] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:14.154 [516/722] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:14.154 [517/722] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:14.411 [518/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:14.411 [519/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:14.411 [520/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:14.411 [521/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:14.668 [522/722] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:14.668 [523/722] Linking static target lib/librte_node.a
00:03:14.668 [524/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:14.936 [525/722] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.936 [526/722] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:14.936 [527/722] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:14.936 [528/722] Linking target lib/librte_node.so.24.2
00:03:15.219 [529/722] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:15.219 [530/722] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:15.219 [531/722] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:15.219 [532/722] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:15.219 [533/722] Linking static target drivers/librte_bus_vdev.a
00:03:15.219 [534/722] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:15.219 [535/722] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:15.219 [536/722] Linking static target drivers/librte_bus_pci.a
00:03:15.477 [537/722] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.477 [538/722] Compiling C object drivers/librte_bus_pci.so.24.2.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:15.477 [539/722] Compiling C object drivers/librte_bus_vdev.so.24.2.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:15.477 [540/722] Linking target drivers/librte_bus_vdev.so.24.2
00:03:15.477 [541/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:15.477 [542/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:15.477 [543/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:15.477 [544/722] Generating symbol file drivers/librte_bus_vdev.so.24.2.p/librte_bus_vdev.so.24.2.symbols
00:03:15.735 [545/722] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:15.735 [546/722] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:15.735 [547/722] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.992 [548/722] Linking target drivers/librte_bus_pci.so.24.2
00:03:15.992 [549/722] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:15.992 [550/722] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:15.992 [551/722] Linking static target drivers/librte_mempool_ring.a
00:03:15.992 [552/722] Compiling C object drivers/librte_mempool_ring.so.24.2.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:15.992 [553/722] Linking target drivers/librte_mempool_ring.so.24.2
00:03:15.992 [554/722] Generating symbol file drivers/librte_bus_pci.so.24.2.p/librte_bus_pci.so.24.2.symbols
00:03:16.250 [555/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:16.508 [556/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:16.766 [557/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:16.766 [558/722] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:17.024 [559/722] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:17.590 [560/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:17.848 [561/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:17.848 [562/722] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:17.848 [563/722] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:18.106 [564/722] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:18.106 [565/722] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:18.364 [566/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:18.622 [567/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:18.622 [568/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:18.622 [569/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:18.879 [570/722] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output)
00:03:18.879 [571/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:18.879 [572/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:19.445 [573/722] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:19.445 [574/722] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:19.704 [575/722] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:19.704 [576/722] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:03:20.270 [577/722] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:20.270 [578/722] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:20.270 [579/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:20.270 [580/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:20.270 [581/722] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:20.528 [582/722] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:20.528 [583/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:21.095 [584/722] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:21.095 [585/722] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:21.095 [586/722] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:03:21.095 [587/722] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:21.095 [588/722] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:21.095 [589/722] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:21.095 [590/722] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:21.354 [591/722] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:21.354 [592/722] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:21.354 [593/722] Linking static target lib/librte_vhost.a
00:03:21.612 [594/722] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:21.612 [595/722] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:21.612 [596/722] Linking static target drivers/librte_net_i40e.a
00:03:21.612 [597/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:21.870 [598/722] Compiling C object drivers/librte_net_i40e.so.24.2.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:21.870 [599/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:21.870 [600/722] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:21.870 [601/722] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:21.870 [602/722] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:22.437 [603/722] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.437 [604/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:22.437 [605/722] Linking target drivers/librte_net_i40e.so.24.2
00:03:22.437 [606/722] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:22.437 [607/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:22.695 [608/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:22.695 [609/722] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.695 [610/722] Linking target lib/librte_vhost.so.24.2
00:03:23.277 [611/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:23.277 [612/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:23.277 [613/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:23.277 [614/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:23.544 [615/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:23.544 [616/722] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:23.544 [617/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:23.544 [618/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:24.110 [619/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:24.110 [620/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:24.368 [621/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:24.368 [622/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:24.368 [623/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:24.368 [624/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:24.626 [625/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:24.626 [626/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:24.626 [627/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:24.626 [628/722] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:24.884 [629/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:25.142 [630/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:25.401 [631/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:25.401 [632/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:25.401 [633/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:25.659 [634/722] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:26.593 [635/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:26.593 [636/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:26.593 [637/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:26.593 [638/722] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:26.593 [639/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:26.593 [640/722] Linking static target lib/librte_pipeline.a
00:03:26.593 [641/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:26.850 [642/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:26.850 [643/722] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:27.107 [644/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:27.107 [645/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:27.107 [646/722] Linking target app/dpdk-dumpcap
00:03:27.364 [647/722] Linking target app/dpdk-graph
00:03:27.364 [648/722] Linking target app/dpdk-pdump
00:03:27.364 [649/722] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:27.621 [650/722] Linking target app/dpdk-proc-info
00:03:27.621 [651/722] Linking target app/dpdk-test-acl
00:03:27.621 [652/722] Linking target app/dpdk-test-cmdline
00:03:27.878 [653/722] Linking target app/dpdk-test-compress-perf
00:03:27.878 [654/722] Linking target app/dpdk-test-crypto-perf
00:03:27.878 [655/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:27.878 [656/722] Linking target app/dpdk-test-dma-perf
00:03:28.135 [657/722] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:28.135 [658/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:28.135 [659/722] Linking target app/dpdk-test-fib
00:03:28.135 [660/722] Linking target app/dpdk-test-gpudev
00:03:28.393 [661/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:28.650 [662/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:28.650 [663/722] Linking target app/dpdk-test-flow-perf
00:03:28.650 [664/722] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:28.650 [665/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:28.650 [666/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:28.908 [667/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:28.908 [668/722] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:28.908 [669/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:29.165 [670/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:29.165 [671/722] Linking target app/dpdk-test-eventdev
00:03:29.165 [672/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:29.165 [673/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:29.422 [674/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:29.422 [675/722] Linking target app/dpdk-test-bbdev
00:03:29.422 [676/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:29.680 [677/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:29.680 [678/722] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.680 [679/722] Linking target lib/librte_pipeline.so.24.2
00:03:29.938 [680/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:29.938 [681/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:29.938 [682/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:29.938 [683/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:30.195 [684/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:30.195 [685/722] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:30.452 [686/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:30.452 [687/722] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:30.452 [688/722] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:30.710 [689/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:30.968 [690/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:30.968 [691/722] Linking target app/dpdk-test-pipeline
00:03:30.968 [692/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:30.968 [693/722] Linking target app/dpdk-test-mldev
00:03:31.225 [694/722] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:31.816 [695/722] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:31.816 [696/722] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:31.816 [697/722] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:31.816 [698/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:31.816 [699/722] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:32.074 [700/722] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:32.332 [701/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:32.332 [702/722] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:32.332 [703/722] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:32.589 [704/722] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:32.845 [705/722] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:33.103 [706/722] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:33.361 [707/722] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:33.619 [708/722] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:33.619 [709/722] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:33.619 [710/722] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:33.619 [711/722] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:33.877 [712/722] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:33.877 [713/722] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:33.877 [714/722] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:33.877 [715/722] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:34.135 [716/722] Linking target app/dpdk-test-sad
00:03:34.135 [717/722] Linking target app/dpdk-test-regex
00:03:34.135 [718/722] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:34.392 [719/722] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:03:34.649 [720/722] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:34.907 [721/722] Linking target app/dpdk-testpmd
00:03:35.164 [722/722] Linking target app/dpdk-test-security-perf
00:03:35.164 02:48:14 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:35.164 ninja: Entering
directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:35.164 [0/1] Installing files. 00:03:35.423 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:35.423 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 
00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing 
/home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 
00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.424 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:35.425 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.425 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.426 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:35.427 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:35.686 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:35.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:35.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:35.687 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:35.687 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.687 Installing lib/librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing lib/librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing lib/librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing lib/librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing drivers/librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:35.948 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing drivers/librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:35.948 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing drivers/librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:35.948 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:35.948 Installing drivers/librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2
00:03:35.948 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.948 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.949 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.950 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:35.951 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:35.951 Installing symlink pointing to librte_log.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24
00:03:35.951 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:03:35.951 Installing symlink pointing to librte_kvargs.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24
00:03:35.951 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:03:35.951 Installing symlink pointing to librte_argparse.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.24
00:03:35.951 Installing symlink pointing to librte_argparse.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so
00:03:35.951 Installing symlink pointing to librte_telemetry.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24
00:03:35.951 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:03:35.951 Installing symlink pointing to librte_eal.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24
00:03:35.951 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:03:35.951 Installing symlink pointing to librte_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24
00:03:35.951 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:03:35.951 Installing symlink pointing to librte_rcu.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24
00:03:35.951 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:03:35.951 Installing symlink pointing to librte_mempool.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24
00:03:35.951 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:03:35.951 Installing symlink pointing to librte_mbuf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24
00:03:35.951 Installing symlink pointing to librte_mbuf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:03:35.951 Installing symlink pointing to librte_net.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24
00:03:35.951 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:03:35.951 Installing symlink pointing to librte_meter.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24
00:03:35.951 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:03:35.951 Installing symlink pointing to librte_ethdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24
00:03:35.951 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:03:35.951 Installing symlink pointing to librte_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24
00:03:35.951 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:03:35.951 Installing symlink pointing to librte_cmdline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24
00:03:35.951 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:03:35.951 Installing symlink pointing to librte_metrics.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24
00:03:35.951 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:03:35.951 Installing symlink pointing to librte_hash.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24
00:03:35.951 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:03:35.951 Installing symlink pointing to librte_timer.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24
00:03:35.951 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:03:35.951 Installing symlink pointing to librte_acl.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24
00:03:35.951 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:03:35.951 Installing symlink pointing to librte_bbdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24
00:03:35.952 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:03:35.952 Installing symlink pointing to librte_bitratestats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24
00:03:35.952 Installing symlink
pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:35.952 Installing symlink pointing to librte_bpf.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:35.952 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:35.952 Installing symlink pointing to librte_cfgfile.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:35.952 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:35.952 Installing symlink pointing to librte_compressdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:35.952 Installing symlink pointing to librte_compressdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:35.952 Installing symlink pointing to librte_cryptodev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:35.952 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:35.952 Installing symlink pointing to librte_distributor.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:35.952 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:35.952 Installing symlink pointing to librte_dmadev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:35.952 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:35.952 Installing symlink pointing to librte_efd.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:35.952 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:35.952 Installing symlink pointing to librte_eventdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:35.952 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:35.952 Installing symlink pointing to librte_dispatcher.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:35.952 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:35.952 Installing symlink pointing to librte_gpudev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:35.952 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:35.952 Installing symlink pointing to librte_gro.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:35.952 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:35.952 Installing symlink pointing to librte_gso.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:35.952 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:35.952 Installing symlink pointing to librte_ip_frag.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:35.952 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:35.952 Installing symlink pointing to librte_jobstats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:35.952 
Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:35.952 Installing symlink pointing to librte_latencystats.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:35.952 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:35.952 Installing symlink pointing to librte_lpm.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:35.952 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:35.952 Installing symlink pointing to librte_member.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:35.952 Installing symlink pointing to librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:35.952 Installing symlink pointing to librte_pcapng.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:35.952 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:35.952 Installing symlink pointing to librte_power.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:35.952 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:35.952 Installing symlink pointing to librte_rawdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:35.952 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:35.952 Installing symlink pointing to librte_regexdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:35.952 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:35.952 Installing symlink pointing to librte_mldev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:35.952 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:35.952 Installing symlink pointing to librte_rib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:35.952 './librte_bus_pci.so' -> 'dpdk/pmds-24.2/librte_bus_pci.so' 00:03:35.952 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24' 00:03:35.952 './librte_bus_pci.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_pci.so.24.2' 00:03:35.952 './librte_bus_vdev.so' -> 'dpdk/pmds-24.2/librte_bus_vdev.so' 00:03:35.952 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24' 00:03:35.952 './librte_bus_vdev.so.24.2' -> 'dpdk/pmds-24.2/librte_bus_vdev.so.24.2' 00:03:35.952 './librte_mempool_ring.so' -> 'dpdk/pmds-24.2/librte_mempool_ring.so' 00:03:35.952 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24' 00:03:35.952 './librte_mempool_ring.so.24.2' -> 'dpdk/pmds-24.2/librte_mempool_ring.so.24.2' 00:03:35.952 './librte_net_i40e.so' -> 'dpdk/pmds-24.2/librte_net_i40e.so' 00:03:35.952 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24' 00:03:35.952 './librte_net_i40e.so.24.2' -> 'dpdk/pmds-24.2/librte_net_i40e.so.24.2' 00:03:35.952 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:35.952 Installing symlink pointing to librte_reorder.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:35.952 Installing symlink pointing to librte_reorder.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:35.952 Installing symlink pointing to librte_sched.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:35.952 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:35.952 Installing symlink pointing to librte_security.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:35.952 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:35.952 Installing symlink pointing to librte_stack.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:35.952 Installing symlink pointing to librte_stack.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:35.952 Installing symlink pointing to librte_vhost.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:35.952 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:35.952 Installing symlink pointing to librte_ipsec.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:35.952 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:35.952 Installing symlink pointing to librte_pdcp.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:35.952 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:35.952 Installing symlink pointing to librte_fib.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:35.952 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:35.952 Installing symlink pointing to librte_port.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:35.952 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:35.952 Installing symlink pointing to librte_pdump.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:35.952 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:35.952 Installing symlink pointing to librte_table.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:35.952 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:35.952 Installing symlink pointing to librte_pipeline.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:35.952 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:35.952 Installing symlink pointing to librte_graph.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:35.952 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:35.952 Installing symlink pointing to librte_node.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:35.952 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:35.952 Installing symlink pointing to librte_bus_pci.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24 00:03:35.952 Installing symlink pointing to librte_bus_pci.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:03:35.952 Installing symlink pointing to librte_bus_vdev.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24 00:03:35.953 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:03:35.953 Installing symlink pointing to librte_mempool_ring.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24 00:03:35.953 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:03:35.953 Installing symlink pointing to librte_net_i40e.so.24.2 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24 00:03:35.953 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:03:35.953 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.2' 00:03:36.211 02:48:15 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:36.211 02:48:15 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:36.211 02:48:15 -- common/autobuild_common.sh@200 -- $ cat 00:03:36.211 02:48:15 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:36.211 00:03:36.211 real 1m3.052s 00:03:36.211 user 7m50.661s 00:03:36.211 sys 1m5.882s 00:03:36.211 ************************************ 00:03:36.211 02:48:15 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:03:36.211 02:48:15 -- common/autotest_common.sh@10 -- $ set +x 00:03:36.211 END TEST build_native_dpdk 00:03:36.211 ************************************ 00:03:36.211 02:48:15 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:36.211 02:48:15 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:36.211 02:48:15 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:36.211 02:48:15 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:36.211 02:48:15 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:36.211 02:48:15 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:36.211 02:48:15 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:36.211 02:48:15 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:36.211 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:36.469 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:36.469 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:36.469 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:36.727 Using 'verbs' RDMA provider 00:03:52.551 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:04.756 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:04.756 Creating mk/config.mk...done. 00:04:04.756 Creating mk/cc.flags.mk...done. 00:04:04.756 Type 'make' to build. 
00:04:04.756 02:48:42 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:04:04.756 02:48:42 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:04:04.756 02:48:42 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:04:04.756 02:48:42 -- common/autotest_common.sh@10 -- $ set +x 00:04:04.756 ************************************ 00:04:04.756 START TEST make 00:04:04.756 ************************************ 00:04:04.756 02:48:42 -- common/autotest_common.sh@1111 -- $ make -j10 00:04:04.756 make[1]: Nothing to be done for 'all'. 00:04:31.309 CC lib/log/log.o 00:04:31.309 CC lib/log/log_flags.o 00:04:31.309 CC lib/log/log_deprecated.o 00:04:31.309 CC lib/ut/ut.o 00:04:31.309 CC lib/ut_mock/mock.o 00:04:31.309 LIB libspdk_ut_mock.a 00:04:31.309 SO libspdk_ut_mock.so.6.0 00:04:31.309 LIB libspdk_ut.a 00:04:31.309 LIB libspdk_log.a 00:04:31.309 SYMLINK libspdk_ut_mock.so 00:04:31.309 SO libspdk_ut.so.2.0 00:04:31.309 SO libspdk_log.so.7.0 00:04:31.309 SYMLINK libspdk_ut.so 00:04:31.309 SYMLINK libspdk_log.so 00:04:31.309 CC lib/util/base64.o 00:04:31.309 CC lib/ioat/ioat.o 00:04:31.309 CC lib/util/bit_array.o 00:04:31.309 CC lib/dma/dma.o 00:04:31.309 CXX lib/trace_parser/trace.o 00:04:31.309 CC lib/util/crc16.o 00:04:31.309 CC lib/util/crc32.o 00:04:31.309 CC lib/util/cpuset.o 00:04:31.309 CC lib/util/crc32c.o 00:04:31.309 CC lib/vfio_user/host/vfio_user_pci.o 00:04:31.309 CC lib/util/crc32_ieee.o 00:04:31.309 CC lib/util/crc64.o 00:04:31.309 CC lib/util/dif.o 00:04:31.309 LIB libspdk_dma.a 00:04:31.309 CC lib/util/fd.o 00:04:31.309 SO libspdk_dma.so.4.0 00:04:31.309 CC lib/util/file.o 00:04:31.309 CC lib/vfio_user/host/vfio_user.o 00:04:31.309 SYMLINK libspdk_dma.so 00:04:31.309 CC lib/util/hexlify.o 00:04:31.309 CC lib/util/iov.o 00:04:31.309 CC lib/util/math.o 00:04:31.309 LIB libspdk_ioat.a 00:04:31.309 CC lib/util/pipe.o 00:04:31.309 SO libspdk_ioat.so.7.0 00:04:31.309 CC lib/util/strerror_tls.o 00:04:31.309 CC lib/util/string.o 00:04:31.309 SYMLINK libspdk_ioat.so 00:04:31.309 CC lib/util/uuid.o 00:04:31.309 LIB libspdk_vfio_user.a 00:04:31.309 CC lib/util/fd_group.o 00:04:31.309 SO libspdk_vfio_user.so.5.0 00:04:31.309 CC lib/util/xor.o 00:04:31.309 CC lib/util/zipf.o 00:04:31.309 SYMLINK libspdk_vfio_user.so 00:04:31.309 LIB libspdk_util.a 00:04:31.309 SO libspdk_util.so.9.0 00:04:31.309 SYMLINK libspdk_util.so 00:04:31.309 LIB libspdk_trace_parser.a 00:04:31.309 SO libspdk_trace_parser.so.5.0 00:04:31.309 SYMLINK libspdk_trace_parser.so 00:04:31.309 CC lib/rdma/common.o 00:04:31.309 CC lib/rdma/rdma_verbs.o 00:04:31.309 CC lib/env_dpdk/env.o 00:04:31.309 CC lib/env_dpdk/memory.o 00:04:31.309 CC lib/env_dpdk/pci.o 00:04:31.309 CC lib/conf/conf.o 00:04:31.309 CC lib/idxd/idxd.o 00:04:31.309 CC lib/env_dpdk/init.o 00:04:31.309 CC lib/json/json_parse.o 00:04:31.309 CC lib/vmd/vmd.o 00:04:31.309 CC lib/vmd/led.o 00:04:31.309 LIB libspdk_conf.a 00:04:31.309 CC lib/json/json_util.o 00:04:31.309 SO libspdk_conf.so.6.0 00:04:31.309 LIB libspdk_rdma.a 00:04:31.309 SYMLINK libspdk_conf.so 00:04:31.309 CC lib/env_dpdk/threads.o 00:04:31.309 SO libspdk_rdma.so.6.0 00:04:31.309 CC lib/env_dpdk/pci_ioat.o 00:04:31.309 CC lib/env_dpdk/pci_virtio.o 00:04:31.309 CC lib/idxd/idxd_user.o 00:04:31.309 SYMLINK libspdk_rdma.so 00:04:31.309 CC lib/env_dpdk/pci_vmd.o 00:04:31.309 CC lib/env_dpdk/pci_idxd.o 00:04:31.309 CC lib/json/json_write.o 00:04:31.309 CC lib/env_dpdk/pci_event.o 00:04:31.309 CC lib/env_dpdk/sigbus_handler.o 00:04:31.309 CC lib/env_dpdk/pci_dpdk.o 00:04:31.309 CC 
lib/env_dpdk/pci_dpdk_2207.o 00:04:31.309 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:31.309 LIB libspdk_vmd.a 00:04:31.309 LIB libspdk_idxd.a 00:04:31.309 SO libspdk_vmd.so.6.0 00:04:31.309 SO libspdk_idxd.so.12.0 00:04:31.309 SYMLINK libspdk_idxd.so 00:04:31.309 SYMLINK libspdk_vmd.so 00:04:31.309 LIB libspdk_json.a 00:04:31.309 SO libspdk_json.so.6.0 00:04:31.309 SYMLINK libspdk_json.so 00:04:31.309 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:31.309 CC lib/jsonrpc/jsonrpc_server.o 00:04:31.309 CC lib/jsonrpc/jsonrpc_client.o 00:04:31.309 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:31.309 LIB libspdk_env_dpdk.a 00:04:31.309 LIB libspdk_jsonrpc.a 00:04:31.309 SO libspdk_jsonrpc.so.6.0 00:04:31.309 SO libspdk_env_dpdk.so.14.0 00:04:31.309 SYMLINK libspdk_jsonrpc.so 00:04:31.309 SYMLINK libspdk_env_dpdk.so 00:04:31.309 CC lib/rpc/rpc.o 00:04:31.309 LIB libspdk_rpc.a 00:04:31.309 SO libspdk_rpc.so.6.0 00:04:31.309 SYMLINK libspdk_rpc.so 00:04:31.309 CC lib/keyring/keyring.o 00:04:31.309 CC lib/notify/notify.o 00:04:31.309 CC lib/trace/trace.o 00:04:31.309 CC lib/keyring/keyring_rpc.o 00:04:31.309 CC lib/trace/trace_flags.o 00:04:31.309 CC lib/notify/notify_rpc.o 00:04:31.309 CC lib/trace/trace_rpc.o 00:04:31.568 LIB libspdk_notify.a 00:04:31.568 SO libspdk_notify.so.6.0 00:04:31.568 LIB libspdk_keyring.a 00:04:31.568 SYMLINK libspdk_notify.so 00:04:31.568 SO libspdk_keyring.so.1.0 00:04:31.568 LIB libspdk_trace.a 00:04:31.568 SO libspdk_trace.so.10.0 00:04:31.828 SYMLINK libspdk_keyring.so 00:04:31.828 SYMLINK libspdk_trace.so 00:04:32.086 CC lib/sock/sock.o 00:04:32.086 CC lib/sock/sock_rpc.o 00:04:32.086 CC lib/thread/thread.o 00:04:32.086 CC lib/thread/iobuf.o 00:04:32.654 LIB libspdk_sock.a 00:04:32.654 SO libspdk_sock.so.9.0 00:04:32.654 SYMLINK libspdk_sock.so 00:04:32.913 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:32.913 CC lib/nvme/nvme_ctrlr.o 00:04:32.913 CC lib/nvme/nvme_ns_cmd.o 00:04:32.913 CC lib/nvme/nvme_fabric.o 00:04:32.913 CC lib/nvme/nvme_ns.o 00:04:32.913 CC lib/nvme/nvme_qpair.o 00:04:32.913 CC lib/nvme/nvme_pcie_common.o 00:04:32.913 CC lib/nvme/nvme_pcie.o 00:04:32.913 CC lib/nvme/nvme.o 00:04:33.478 LIB libspdk_thread.a 00:04:33.736 SO libspdk_thread.so.10.0 00:04:33.736 CC lib/nvme/nvme_quirks.o 00:04:33.736 CC lib/nvme/nvme_transport.o 00:04:33.736 SYMLINK libspdk_thread.so 00:04:33.736 CC lib/nvme/nvme_discovery.o 00:04:33.736 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:33.736 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:33.736 CC lib/nvme/nvme_tcp.o 00:04:33.994 CC lib/nvme/nvme_opal.o 00:04:33.994 CC lib/nvme/nvme_io_msg.o 00:04:33.994 CC lib/nvme/nvme_poll_group.o 00:04:34.250 CC lib/nvme/nvme_zns.o 00:04:34.250 CC lib/nvme/nvme_stubs.o 00:04:34.508 CC lib/nvme/nvme_auth.o 00:04:34.508 CC lib/nvme/nvme_cuse.o 00:04:34.508 CC lib/nvme/nvme_rdma.o 00:04:34.766 CC lib/accel/accel.o 00:04:34.766 CC lib/blob/blobstore.o 00:04:34.766 CC lib/init/json_config.o 00:04:35.023 CC lib/init/subsystem.o 00:04:35.023 CC lib/virtio/virtio.o 00:04:35.023 CC lib/virtio/virtio_vhost_user.o 00:04:35.023 CC lib/blob/request.o 00:04:35.303 CC lib/init/subsystem_rpc.o 00:04:35.303 CC lib/init/rpc.o 00:04:35.303 CC lib/accel/accel_rpc.o 00:04:35.303 CC lib/accel/accel_sw.o 00:04:35.303 CC lib/blob/zeroes.o 00:04:35.303 CC lib/blob/blob_bs_dev.o 00:04:35.586 CC lib/virtio/virtio_vfio_user.o 00:04:35.586 CC lib/virtio/virtio_pci.o 00:04:35.586 LIB libspdk_init.a 00:04:35.586 SO libspdk_init.so.5.0 00:04:35.586 SYMLINK libspdk_init.so 00:04:35.844 LIB libspdk_virtio.a 00:04:35.844 CC lib/event/app.o 00:04:35.844 CC 
lib/event/reactor.o 00:04:35.844 CC lib/event/log_rpc.o 00:04:35.844 CC lib/event/scheduler_static.o 00:04:35.844 CC lib/event/app_rpc.o 00:04:35.844 SO libspdk_virtio.so.7.0 00:04:35.844 LIB libspdk_accel.a 00:04:35.844 SO libspdk_accel.so.15.0 00:04:35.844 SYMLINK libspdk_virtio.so 00:04:35.844 LIB libspdk_nvme.a 00:04:36.103 SYMLINK libspdk_accel.so 00:04:36.103 SO libspdk_nvme.so.13.0 00:04:36.103 CC lib/bdev/bdev.o 00:04:36.103 CC lib/bdev/bdev_rpc.o 00:04:36.103 CC lib/bdev/part.o 00:04:36.103 CC lib/bdev/bdev_zone.o 00:04:36.103 CC lib/bdev/scsi_nvme.o 00:04:36.103 LIB libspdk_event.a 00:04:36.362 SO libspdk_event.so.13.0 00:04:36.362 SYMLINK libspdk_nvme.so 00:04:36.362 SYMLINK libspdk_event.so 00:04:37.739 LIB libspdk_blob.a 00:04:37.739 SO libspdk_blob.so.11.0 00:04:37.998 SYMLINK libspdk_blob.so 00:04:38.256 CC lib/lvol/lvol.o 00:04:38.256 CC lib/blobfs/blobfs.o 00:04:38.256 CC lib/blobfs/tree.o 00:04:38.824 LIB libspdk_bdev.a 00:04:38.824 SO libspdk_bdev.so.15.0 00:04:39.083 LIB libspdk_blobfs.a 00:04:39.083 SO libspdk_blobfs.so.10.0 00:04:39.083 SYMLINK libspdk_bdev.so 00:04:39.083 SYMLINK libspdk_blobfs.so 00:04:39.083 LIB libspdk_lvol.a 00:04:39.083 SO libspdk_lvol.so.10.0 00:04:39.341 SYMLINK libspdk_lvol.so 00:04:39.341 CC lib/nbd/nbd_rpc.o 00:04:39.341 CC lib/nvmf/ctrlr.o 00:04:39.341 CC lib/nbd/nbd.o 00:04:39.341 CC lib/nvmf/ctrlr_discovery.o 00:04:39.341 CC lib/ublk/ublk_rpc.o 00:04:39.341 CC lib/ublk/ublk.o 00:04:39.341 CC lib/scsi/lun.o 00:04:39.341 CC lib/nvmf/ctrlr_bdev.o 00:04:39.341 CC lib/scsi/dev.o 00:04:39.341 CC lib/ftl/ftl_core.o 00:04:39.341 CC lib/ftl/ftl_init.o 00:04:39.341 CC lib/ftl/ftl_layout.o 00:04:39.599 CC lib/ftl/ftl_debug.o 00:04:39.599 CC lib/scsi/port.o 00:04:39.599 CC lib/scsi/scsi.o 00:04:39.599 LIB libspdk_nbd.a 00:04:39.599 SO libspdk_nbd.so.7.0 00:04:39.857 CC lib/nvmf/subsystem.o 00:04:39.857 SYMLINK libspdk_nbd.so 00:04:39.857 CC lib/scsi/scsi_bdev.o 00:04:39.857 CC lib/ftl/ftl_io.o 00:04:39.857 CC lib/scsi/scsi_pr.o 00:04:39.857 CC lib/scsi/scsi_rpc.o 00:04:39.858 CC lib/ftl/ftl_sb.o 00:04:39.858 CC lib/scsi/task.o 00:04:39.858 CC lib/nvmf/nvmf.o 00:04:39.858 LIB libspdk_ublk.a 00:04:39.858 CC lib/nvmf/nvmf_rpc.o 00:04:40.115 CC lib/nvmf/transport.o 00:04:40.115 SO libspdk_ublk.so.3.0 00:04:40.115 CC lib/ftl/ftl_l2p.o 00:04:40.115 CC lib/ftl/ftl_l2p_flat.o 00:04:40.115 SYMLINK libspdk_ublk.so 00:04:40.115 CC lib/ftl/ftl_nv_cache.o 00:04:40.115 CC lib/ftl/ftl_band.o 00:04:40.373 LIB libspdk_scsi.a 00:04:40.373 CC lib/nvmf/tcp.o 00:04:40.373 CC lib/ftl/ftl_band_ops.o 00:04:40.373 SO libspdk_scsi.so.9.0 00:04:40.373 SYMLINK libspdk_scsi.so 00:04:40.373 CC lib/ftl/ftl_writer.o 00:04:40.631 CC lib/ftl/ftl_rq.o 00:04:40.631 CC lib/ftl/ftl_reloc.o 00:04:40.631 CC lib/ftl/ftl_l2p_cache.o 00:04:40.631 CC lib/ftl/ftl_p2l.o 00:04:40.631 CC lib/ftl/mngt/ftl_mngt.o 00:04:40.889 CC lib/nvmf/rdma.o 00:04:40.889 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:40.889 CC lib/iscsi/conn.o 00:04:40.889 CC lib/vhost/vhost.o 00:04:40.889 CC lib/vhost/vhost_rpc.o 00:04:41.147 CC lib/vhost/vhost_scsi.o 00:04:41.147 CC lib/vhost/vhost_blk.o 00:04:41.147 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:41.147 CC lib/iscsi/init_grp.o 00:04:41.147 CC lib/iscsi/iscsi.o 00:04:41.147 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:41.406 CC lib/vhost/rte_vhost_user.o 00:04:41.406 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:41.664 CC lib/iscsi/md5.o 00:04:41.664 CC lib/iscsi/param.o 00:04:41.664 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:41.664 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:41.664 CC 
lib/ftl/mngt/ftl_mngt_l2p.o 00:04:41.922 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:41.922 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:41.922 CC lib/iscsi/portal_grp.o 00:04:41.922 CC lib/iscsi/tgt_node.o 00:04:41.922 CC lib/iscsi/iscsi_subsystem.o 00:04:41.922 CC lib/iscsi/iscsi_rpc.o 00:04:41.922 CC lib/iscsi/task.o 00:04:42.181 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:42.181 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:42.181 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:42.181 CC lib/ftl/utils/ftl_conf.o 00:04:42.439 CC lib/ftl/utils/ftl_md.o 00:04:42.439 CC lib/ftl/utils/ftl_mempool.o 00:04:42.439 CC lib/ftl/utils/ftl_bitmap.o 00:04:42.439 CC lib/ftl/utils/ftl_property.o 00:04:42.439 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:42.439 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:42.439 LIB libspdk_vhost.a 00:04:42.697 SO libspdk_vhost.so.8.0 00:04:42.697 LIB libspdk_iscsi.a 00:04:42.697 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:42.697 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:42.697 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:42.697 SYMLINK libspdk_vhost.so 00:04:42.698 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:42.698 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:42.698 SO libspdk_iscsi.so.8.0 00:04:42.698 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:42.698 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:42.698 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:42.698 CC lib/ftl/base/ftl_base_dev.o 00:04:42.956 CC lib/ftl/base/ftl_base_bdev.o 00:04:42.956 CC lib/ftl/ftl_trace.o 00:04:42.956 SYMLINK libspdk_iscsi.so 00:04:42.956 LIB libspdk_nvmf.a 00:04:43.213 LIB libspdk_ftl.a 00:04:43.213 SO libspdk_nvmf.so.18.0 00:04:43.472 SYMLINK libspdk_nvmf.so 00:04:43.472 SO libspdk_ftl.so.9.0 00:04:43.731 SYMLINK libspdk_ftl.so 00:04:43.989 CC module/env_dpdk/env_dpdk_rpc.o 00:04:43.989 CC module/accel/dsa/accel_dsa.o 00:04:43.989 CC module/keyring/file/keyring.o 00:04:43.989 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:43.989 CC module/blob/bdev/blob_bdev.o 00:04:43.989 CC module/accel/error/accel_error.o 00:04:43.989 CC module/sock/uring/uring.o 00:04:43.989 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:43.989 CC module/sock/posix/posix.o 00:04:43.989 CC module/accel/ioat/accel_ioat.o 00:04:44.266 LIB libspdk_env_dpdk_rpc.a 00:04:44.266 SO libspdk_env_dpdk_rpc.so.6.0 00:04:44.266 SYMLINK libspdk_env_dpdk_rpc.so 00:04:44.266 CC module/accel/error/accel_error_rpc.o 00:04:44.266 CC module/keyring/file/keyring_rpc.o 00:04:44.266 LIB libspdk_scheduler_dpdk_governor.a 00:04:44.266 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:44.266 LIB libspdk_scheduler_dynamic.a 00:04:44.266 CC module/accel/ioat/accel_ioat_rpc.o 00:04:44.266 CC module/accel/dsa/accel_dsa_rpc.o 00:04:44.266 SO libspdk_scheduler_dynamic.so.4.0 00:04:44.266 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:44.266 LIB libspdk_blob_bdev.a 00:04:44.266 LIB libspdk_accel_error.a 00:04:44.524 LIB libspdk_keyring_file.a 00:04:44.524 SO libspdk_blob_bdev.so.11.0 00:04:44.524 SYMLINK libspdk_scheduler_dynamic.so 00:04:44.524 SO libspdk_accel_error.so.2.0 00:04:44.524 SO libspdk_keyring_file.so.1.0 00:04:44.524 LIB libspdk_accel_ioat.a 00:04:44.524 CC module/accel/iaa/accel_iaa.o 00:04:44.524 SYMLINK libspdk_blob_bdev.so 00:04:44.524 CC module/accel/iaa/accel_iaa_rpc.o 00:04:44.524 SYMLINK libspdk_accel_error.so 00:04:44.524 LIB libspdk_accel_dsa.a 00:04:44.524 SYMLINK libspdk_keyring_file.so 00:04:44.524 SO libspdk_accel_ioat.so.6.0 00:04:44.524 SO libspdk_accel_dsa.so.5.0 00:04:44.524 CC module/scheduler/gscheduler/gscheduler.o 00:04:44.524 SYMLINK 
libspdk_accel_ioat.so 00:04:44.525 SYMLINK libspdk_accel_dsa.so 00:04:44.783 LIB libspdk_accel_iaa.a 00:04:44.783 LIB libspdk_scheduler_gscheduler.a 00:04:44.783 CC module/bdev/error/vbdev_error.o 00:04:44.783 CC module/bdev/delay/vbdev_delay.o 00:04:44.783 SO libspdk_accel_iaa.so.3.0 00:04:44.783 SO libspdk_scheduler_gscheduler.so.4.0 00:04:44.783 CC module/bdev/gpt/gpt.o 00:04:44.783 CC module/blobfs/bdev/blobfs_bdev.o 00:04:44.783 CC module/bdev/lvol/vbdev_lvol.o 00:04:44.783 SYMLINK libspdk_scheduler_gscheduler.so 00:04:44.783 CC module/bdev/malloc/bdev_malloc.o 00:04:44.783 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:44.783 SYMLINK libspdk_accel_iaa.so 00:04:44.783 LIB libspdk_sock_uring.a 00:04:44.783 LIB libspdk_sock_posix.a 00:04:44.783 SO libspdk_sock_uring.so.5.0 00:04:44.783 SO libspdk_sock_posix.so.6.0 00:04:45.042 SYMLINK libspdk_sock_uring.so 00:04:45.042 CC module/bdev/gpt/vbdev_gpt.o 00:04:45.042 SYMLINK libspdk_sock_posix.so 00:04:45.042 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:45.042 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:45.042 CC module/bdev/error/vbdev_error_rpc.o 00:04:45.042 CC module/bdev/null/bdev_null.o 00:04:45.042 CC module/bdev/nvme/bdev_nvme.o 00:04:45.042 LIB libspdk_bdev_delay.a 00:04:45.042 LIB libspdk_blobfs_bdev.a 00:04:45.042 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:45.042 LIB libspdk_bdev_error.a 00:04:45.301 SO libspdk_bdev_delay.so.6.0 00:04:45.301 SO libspdk_blobfs_bdev.so.6.0 00:04:45.301 SO libspdk_bdev_error.so.6.0 00:04:45.301 LIB libspdk_bdev_gpt.a 00:04:45.301 CC module/bdev/null/bdev_null_rpc.o 00:04:45.301 SYMLINK libspdk_bdev_delay.so 00:04:45.301 SYMLINK libspdk_blobfs_bdev.so 00:04:45.301 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:45.301 CC module/bdev/passthru/vbdev_passthru.o 00:04:45.301 SYMLINK libspdk_bdev_error.so 00:04:45.301 SO libspdk_bdev_gpt.so.6.0 00:04:45.301 CC module/bdev/raid/bdev_raid.o 00:04:45.301 LIB libspdk_bdev_lvol.a 00:04:45.301 LIB libspdk_bdev_malloc.a 00:04:45.301 SYMLINK libspdk_bdev_gpt.so 00:04:45.301 SO libspdk_bdev_lvol.so.6.0 00:04:45.301 SO libspdk_bdev_malloc.so.6.0 00:04:45.561 LIB libspdk_bdev_null.a 00:04:45.561 SYMLINK libspdk_bdev_lvol.so 00:04:45.561 CC module/bdev/split/vbdev_split.o 00:04:45.561 CC module/bdev/raid/bdev_raid_rpc.o 00:04:45.561 SYMLINK libspdk_bdev_malloc.so 00:04:45.561 CC module/bdev/raid/bdev_raid_sb.o 00:04:45.561 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:45.561 SO libspdk_bdev_null.so.6.0 00:04:45.561 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:45.561 CC module/bdev/uring/bdev_uring.o 00:04:45.561 SYMLINK libspdk_bdev_null.so 00:04:45.819 CC module/bdev/split/vbdev_split_rpc.o 00:04:45.819 CC module/bdev/aio/bdev_aio.o 00:04:45.819 LIB libspdk_bdev_passthru.a 00:04:45.819 SO libspdk_bdev_passthru.so.6.0 00:04:45.819 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:45.819 SYMLINK libspdk_bdev_passthru.so 00:04:45.819 CC module/bdev/nvme/nvme_rpc.o 00:04:45.819 LIB libspdk_bdev_split.a 00:04:45.819 CC module/bdev/ftl/bdev_ftl.o 00:04:46.078 SO libspdk_bdev_split.so.6.0 00:04:46.078 CC module/bdev/iscsi/bdev_iscsi.o 00:04:46.078 CC module/bdev/uring/bdev_uring_rpc.o 00:04:46.078 SYMLINK libspdk_bdev_split.so 00:04:46.078 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:46.078 LIB libspdk_bdev_zone_block.a 00:04:46.078 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:46.078 SO libspdk_bdev_zone_block.so.6.0 00:04:46.078 CC module/bdev/aio/bdev_aio_rpc.o 00:04:46.078 SYMLINK libspdk_bdev_zone_block.so 00:04:46.078 CC module/bdev/raid/raid0.o 00:04:46.078 CC 
module/bdev/nvme/bdev_mdns_client.o 00:04:46.078 LIB libspdk_bdev_uring.a 00:04:46.337 SO libspdk_bdev_uring.so.6.0 00:04:46.337 CC module/bdev/raid/raid1.o 00:04:46.337 LIB libspdk_bdev_ftl.a 00:04:46.337 CC module/bdev/nvme/vbdev_opal.o 00:04:46.337 SO libspdk_bdev_ftl.so.6.0 00:04:46.337 SYMLINK libspdk_bdev_uring.so 00:04:46.337 LIB libspdk_bdev_aio.a 00:04:46.337 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:46.337 SO libspdk_bdev_aio.so.6.0 00:04:46.337 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:46.337 SYMLINK libspdk_bdev_ftl.so 00:04:46.337 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:46.337 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:46.337 SYMLINK libspdk_bdev_aio.so 00:04:46.337 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:46.337 CC module/bdev/raid/concat.o 00:04:46.596 LIB libspdk_bdev_iscsi.a 00:04:46.596 SO libspdk_bdev_iscsi.so.6.0 00:04:46.596 SYMLINK libspdk_bdev_iscsi.so 00:04:46.596 LIB libspdk_bdev_virtio.a 00:04:46.596 LIB libspdk_bdev_raid.a 00:04:46.596 SO libspdk_bdev_virtio.so.6.0 00:04:46.596 SO libspdk_bdev_raid.so.6.0 00:04:46.855 SYMLINK libspdk_bdev_virtio.so 00:04:46.855 SYMLINK libspdk_bdev_raid.so 00:04:47.421 LIB libspdk_bdev_nvme.a 00:04:47.421 SO libspdk_bdev_nvme.so.7.0 00:04:47.421 SYMLINK libspdk_bdev_nvme.so 00:04:47.988 CC module/event/subsystems/keyring/keyring.o 00:04:47.988 CC module/event/subsystems/iobuf/iobuf.o 00:04:47.988 CC module/event/subsystems/scheduler/scheduler.o 00:04:47.988 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:47.988 CC module/event/subsystems/vmd/vmd.o 00:04:47.988 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:47.988 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:47.988 CC module/event/subsystems/sock/sock.o 00:04:48.246 LIB libspdk_event_scheduler.a 00:04:48.246 LIB libspdk_event_vhost_blk.a 00:04:48.246 LIB libspdk_event_iobuf.a 00:04:48.246 LIB libspdk_event_vmd.a 00:04:48.246 SO libspdk_event_scheduler.so.4.0 00:04:48.246 SO libspdk_event_vhost_blk.so.3.0 00:04:48.246 SO libspdk_event_iobuf.so.3.0 00:04:48.246 SO libspdk_event_vmd.so.6.0 00:04:48.246 LIB libspdk_event_sock.a 00:04:48.246 LIB libspdk_event_keyring.a 00:04:48.246 SYMLINK libspdk_event_vhost_blk.so 00:04:48.246 SYMLINK libspdk_event_scheduler.so 00:04:48.246 SO libspdk_event_keyring.so.1.0 00:04:48.246 SO libspdk_event_sock.so.5.0 00:04:48.246 SYMLINK libspdk_event_iobuf.so 00:04:48.246 SYMLINK libspdk_event_vmd.so 00:04:48.246 SYMLINK libspdk_event_keyring.so 00:04:48.247 SYMLINK libspdk_event_sock.so 00:04:48.505 CC module/event/subsystems/accel/accel.o 00:04:48.763 LIB libspdk_event_accel.a 00:04:48.763 SO libspdk_event_accel.so.6.0 00:04:48.763 SYMLINK libspdk_event_accel.so 00:04:49.021 CC module/event/subsystems/bdev/bdev.o 00:04:49.279 LIB libspdk_event_bdev.a 00:04:49.279 SO libspdk_event_bdev.so.6.0 00:04:49.279 SYMLINK libspdk_event_bdev.so 00:04:49.538 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:49.538 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:49.538 CC module/event/subsystems/ublk/ublk.o 00:04:49.538 CC module/event/subsystems/nbd/nbd.o 00:04:49.538 CC module/event/subsystems/scsi/scsi.o 00:04:49.796 LIB libspdk_event_nbd.a 00:04:49.796 LIB libspdk_event_ublk.a 00:04:49.796 LIB libspdk_event_scsi.a 00:04:49.796 SO libspdk_event_nbd.so.6.0 00:04:49.796 SO libspdk_event_ublk.so.3.0 00:04:49.796 SO libspdk_event_scsi.so.6.0 00:04:49.796 SYMLINK libspdk_event_nbd.so 00:04:49.796 SYMLINK libspdk_event_ublk.so 00:04:49.796 LIB libspdk_event_nvmf.a 00:04:49.796 SYMLINK libspdk_event_scsi.so 00:04:49.796 SO 
libspdk_event_nvmf.so.6.0 00:04:50.054 SYMLINK libspdk_event_nvmf.so 00:04:50.054 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:50.054 CC module/event/subsystems/iscsi/iscsi.o 00:04:50.313 LIB libspdk_event_vhost_scsi.a 00:04:50.313 LIB libspdk_event_iscsi.a 00:04:50.313 SO libspdk_event_vhost_scsi.so.3.0 00:04:50.313 SO libspdk_event_iscsi.so.6.0 00:04:50.313 SYMLINK libspdk_event_vhost_scsi.so 00:04:50.313 SYMLINK libspdk_event_iscsi.so 00:04:50.571 SO libspdk.so.6.0 00:04:50.571 SYMLINK libspdk.so 00:04:50.830 CXX app/trace/trace.o 00:04:50.830 CC app/trace_record/trace_record.o 00:04:50.830 CC examples/ioat/perf/perf.o 00:04:50.830 CC examples/sock/hello_world/hello_sock.o 00:04:50.830 CC examples/nvme/hello_world/hello_world.o 00:04:50.830 CC examples/accel/perf/accel_perf.o 00:04:51.088 CC examples/blob/hello_world/hello_blob.o 00:04:51.088 CC test/app/bdev_svc/bdev_svc.o 00:04:51.088 CC test/accel/dif/dif.o 00:04:51.088 CC examples/bdev/hello_world/hello_bdev.o 00:04:51.088 LINK spdk_trace_record 00:04:51.088 LINK ioat_perf 00:04:51.088 LINK hello_world 00:04:51.346 LINK hello_sock 00:04:51.346 LINK bdev_svc 00:04:51.346 LINK hello_blob 00:04:51.346 LINK spdk_trace 00:04:51.346 LINK hello_bdev 00:04:51.346 CC app/nvmf_tgt/nvmf_main.o 00:04:51.346 LINK accel_perf 00:04:51.346 LINK dif 00:04:51.346 CC examples/ioat/verify/verify.o 00:04:51.604 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.604 CC examples/nvme/reconnect/reconnect.o 00:04:51.604 CC examples/blob/cli/blobcli.o 00:04:51.604 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:51.604 LINK nvmf_tgt 00:04:51.604 CC examples/bdev/bdevperf/bdevperf.o 00:04:51.604 CC examples/vmd/lsvmd/lsvmd.o 00:04:51.605 LINK verify 00:04:51.863 CC examples/nvme/arbitration/arbitration.o 00:04:51.863 LINK lsvmd 00:04:51.863 CC test/bdev/bdevio/bdevio.o 00:04:51.863 LINK reconnect 00:04:51.863 CC examples/nvme/hotplug/hotplug.o 00:04:51.863 LINK nvme_manage 00:04:51.863 CC app/iscsi_tgt/iscsi_tgt.o 00:04:52.121 LINK nvme_fuzz 00:04:52.121 CC examples/vmd/led/led.o 00:04:52.121 LINK blobcli 00:04:52.121 LINK arbitration 00:04:52.121 CC app/spdk_tgt/spdk_tgt.o 00:04:52.121 LINK hotplug 00:04:52.121 LINK led 00:04:52.121 LINK iscsi_tgt 00:04:52.121 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:52.121 LINK bdevio 00:04:52.380 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:52.380 CC app/spdk_nvme_perf/perf.o 00:04:52.380 CC app/spdk_lspci/spdk_lspci.o 00:04:52.380 LINK bdevperf 00:04:52.380 LINK spdk_tgt 00:04:52.380 LINK cmb_copy 00:04:52.380 CC app/spdk_nvme_discover/discovery_aer.o 00:04:52.380 CC app/spdk_nvme_identify/identify.o 00:04:52.643 LINK spdk_lspci 00:04:52.643 CC app/spdk_top/spdk_top.o 00:04:52.643 LINK spdk_nvme_discover 00:04:52.643 CC examples/nvme/abort/abort.o 00:04:52.643 CC test/blobfs/mkfs/mkfs.o 00:04:52.643 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.914 CC examples/util/zipf/zipf.o 00:04:52.915 CC examples/nvmf/nvmf/nvmf.o 00:04:52.915 LINK mkfs 00:04:52.915 LINK pmr_persistence 00:04:53.173 LINK zipf 00:04:53.173 LINK abort 00:04:53.173 CC examples/thread/thread/thread_ex.o 00:04:53.173 LINK nvmf 00:04:53.173 LINK spdk_nvme_perf 00:04:53.173 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:53.173 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:53.432 LINK spdk_nvme_identify 00:04:53.432 CC examples/idxd/perf/perf.o 00:04:53.432 TEST_HEADER include/spdk/accel.h 00:04:53.432 TEST_HEADER include/spdk/accel_module.h 00:04:53.432 TEST_HEADER include/spdk/assert.h 00:04:53.432 TEST_HEADER 
include/spdk/barrier.h 00:04:53.432 TEST_HEADER include/spdk/base64.h 00:04:53.432 TEST_HEADER include/spdk/bdev.h 00:04:53.432 TEST_HEADER include/spdk/bdev_module.h 00:04:53.432 TEST_HEADER include/spdk/bdev_zone.h 00:04:53.432 LINK thread 00:04:53.432 TEST_HEADER include/spdk/bit_array.h 00:04:53.432 TEST_HEADER include/spdk/bit_pool.h 00:04:53.432 TEST_HEADER include/spdk/blob_bdev.h 00:04:53.432 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:53.432 TEST_HEADER include/spdk/blobfs.h 00:04:53.432 TEST_HEADER include/spdk/blob.h 00:04:53.432 TEST_HEADER include/spdk/conf.h 00:04:53.432 TEST_HEADER include/spdk/config.h 00:04:53.432 TEST_HEADER include/spdk/cpuset.h 00:04:53.432 TEST_HEADER include/spdk/crc16.h 00:04:53.432 TEST_HEADER include/spdk/crc32.h 00:04:53.432 TEST_HEADER include/spdk/crc64.h 00:04:53.432 TEST_HEADER include/spdk/dif.h 00:04:53.432 TEST_HEADER include/spdk/dma.h 00:04:53.432 LINK spdk_top 00:04:53.432 TEST_HEADER include/spdk/endian.h 00:04:53.432 TEST_HEADER include/spdk/env_dpdk.h 00:04:53.432 TEST_HEADER include/spdk/env.h 00:04:53.432 TEST_HEADER include/spdk/event.h 00:04:53.432 TEST_HEADER include/spdk/fd_group.h 00:04:53.432 TEST_HEADER include/spdk/fd.h 00:04:53.432 TEST_HEADER include/spdk/file.h 00:04:53.432 TEST_HEADER include/spdk/ftl.h 00:04:53.432 TEST_HEADER include/spdk/gpt_spec.h 00:04:53.432 TEST_HEADER include/spdk/hexlify.h 00:04:53.432 TEST_HEADER include/spdk/histogram_data.h 00:04:53.432 TEST_HEADER include/spdk/idxd.h 00:04:53.432 TEST_HEADER include/spdk/idxd_spec.h 00:04:53.432 TEST_HEADER include/spdk/init.h 00:04:53.432 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:53.432 TEST_HEADER include/spdk/ioat.h 00:04:53.432 TEST_HEADER include/spdk/ioat_spec.h 00:04:53.432 TEST_HEADER include/spdk/iscsi_spec.h 00:04:53.432 TEST_HEADER include/spdk/json.h 00:04:53.432 TEST_HEADER include/spdk/jsonrpc.h 00:04:53.432 TEST_HEADER include/spdk/keyring.h 00:04:53.432 TEST_HEADER include/spdk/keyring_module.h 00:04:53.432 TEST_HEADER include/spdk/likely.h 00:04:53.432 TEST_HEADER include/spdk/log.h 00:04:53.432 TEST_HEADER include/spdk/lvol.h 00:04:53.432 TEST_HEADER include/spdk/memory.h 00:04:53.432 TEST_HEADER include/spdk/mmio.h 00:04:53.432 TEST_HEADER include/spdk/nbd.h 00:04:53.432 TEST_HEADER include/spdk/notify.h 00:04:53.432 TEST_HEADER include/spdk/nvme.h 00:04:53.432 TEST_HEADER include/spdk/nvme_intel.h 00:04:53.432 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:53.432 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:53.432 TEST_HEADER include/spdk/nvme_spec.h 00:04:53.432 TEST_HEADER include/spdk/nvme_zns.h 00:04:53.432 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:53.432 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:53.432 TEST_HEADER include/spdk/nvmf.h 00:04:53.432 TEST_HEADER include/spdk/nvmf_spec.h 00:04:53.432 TEST_HEADER include/spdk/nvmf_transport.h 00:04:53.432 LINK interrupt_tgt 00:04:53.432 TEST_HEADER include/spdk/opal.h 00:04:53.432 TEST_HEADER include/spdk/opal_spec.h 00:04:53.432 TEST_HEADER include/spdk/pci_ids.h 00:04:53.432 TEST_HEADER include/spdk/pipe.h 00:04:53.432 TEST_HEADER include/spdk/queue.h 00:04:53.432 TEST_HEADER include/spdk/reduce.h 00:04:53.432 TEST_HEADER include/spdk/rpc.h 00:04:53.432 TEST_HEADER include/spdk/scheduler.h 00:04:53.432 TEST_HEADER include/spdk/scsi.h 00:04:53.432 TEST_HEADER include/spdk/scsi_spec.h 00:04:53.432 TEST_HEADER include/spdk/sock.h 00:04:53.432 TEST_HEADER include/spdk/stdinc.h 00:04:53.432 TEST_HEADER include/spdk/string.h 00:04:53.432 TEST_HEADER include/spdk/thread.h 
00:04:53.432 TEST_HEADER include/spdk/trace.h 00:04:53.432 TEST_HEADER include/spdk/trace_parser.h 00:04:53.432 TEST_HEADER include/spdk/tree.h 00:04:53.432 CC test/app/histogram_perf/histogram_perf.o 00:04:53.432 TEST_HEADER include/spdk/ublk.h 00:04:53.691 TEST_HEADER include/spdk/util.h 00:04:53.691 TEST_HEADER include/spdk/uuid.h 00:04:53.691 TEST_HEADER include/spdk/version.h 00:04:53.691 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:53.691 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:53.691 TEST_HEADER include/spdk/vhost.h 00:04:53.691 TEST_HEADER include/spdk/vmd.h 00:04:53.691 TEST_HEADER include/spdk/xor.h 00:04:53.691 TEST_HEADER include/spdk/zipf.h 00:04:53.691 CXX test/cpp_headers/accel.o 00:04:53.691 CC app/vhost/vhost.o 00:04:53.691 CC app/spdk_dd/spdk_dd.o 00:04:53.691 LINK idxd_perf 00:04:53.691 LINK histogram_perf 00:04:53.691 CXX test/cpp_headers/accel_module.o 00:04:53.691 CC test/app/jsoncat/jsoncat.o 00:04:53.691 LINK vhost 00:04:53.950 CC app/fio/nvme/fio_plugin.o 00:04:53.950 LINK vhost_fuzz 00:04:53.950 LINK jsoncat 00:04:53.950 CXX test/cpp_headers/assert.o 00:04:53.950 LINK iscsi_fuzz 00:04:53.950 CC test/app/stub/stub.o 00:04:53.950 CC test/dma/test_dma/test_dma.o 00:04:53.950 CXX test/cpp_headers/barrier.o 00:04:54.208 LINK spdk_dd 00:04:54.208 CC test/env/mem_callbacks/mem_callbacks.o 00:04:54.208 LINK stub 00:04:54.208 CXX test/cpp_headers/base64.o 00:04:54.208 CC test/event/event_perf/event_perf.o 00:04:54.208 CC test/rpc_client/rpc_client_test.o 00:04:54.208 CC test/nvme/aer/aer.o 00:04:54.465 CC test/lvol/esnap/esnap.o 00:04:54.465 LINK spdk_nvme 00:04:54.465 LINK event_perf 00:04:54.465 LINK test_dma 00:04:54.465 CC app/fio/bdev/fio_plugin.o 00:04:54.465 CXX test/cpp_headers/bdev.o 00:04:54.465 CC test/thread/poller_perf/poller_perf.o 00:04:54.465 LINK rpc_client_test 00:04:54.723 LINK aer 00:04:54.723 CC test/nvme/reset/reset.o 00:04:54.723 LINK poller_perf 00:04:54.723 CC test/event/reactor/reactor.o 00:04:54.723 CXX test/cpp_headers/bdev_module.o 00:04:54.723 LINK mem_callbacks 00:04:54.981 CC test/event/reactor_perf/reactor_perf.o 00:04:54.981 CC test/event/app_repeat/app_repeat.o 00:04:54.981 CXX test/cpp_headers/bdev_zone.o 00:04:54.981 LINK reactor 00:04:54.981 LINK reset 00:04:54.981 CC test/env/vtophys/vtophys.o 00:04:54.981 LINK reactor_perf 00:04:54.981 CC test/event/scheduler/scheduler.o 00:04:54.981 LINK app_repeat 00:04:54.982 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:54.982 LINK spdk_bdev 00:04:54.982 CXX test/cpp_headers/bit_array.o 00:04:55.239 CC test/env/memory/memory_ut.o 00:04:55.239 LINK vtophys 00:04:55.239 CC test/nvme/sgl/sgl.o 00:04:55.239 LINK env_dpdk_post_init 00:04:55.239 CXX test/cpp_headers/bit_pool.o 00:04:55.239 CC test/nvme/e2edp/nvme_dp.o 00:04:55.239 LINK scheduler 00:04:55.239 CC test/nvme/overhead/overhead.o 00:04:55.239 CC test/env/pci/pci_ut.o 00:04:55.496 CC test/nvme/err_injection/err_injection.o 00:04:55.496 CXX test/cpp_headers/blob_bdev.o 00:04:55.496 CXX test/cpp_headers/blobfs_bdev.o 00:04:55.496 CC test/nvme/startup/startup.o 00:04:55.496 LINK sgl 00:04:55.496 LINK nvme_dp 00:04:55.496 LINK overhead 00:04:55.496 LINK err_injection 00:04:55.753 CXX test/cpp_headers/blobfs.o 00:04:55.753 CXX test/cpp_headers/blob.o 00:04:55.753 LINK startup 00:04:55.753 LINK pci_ut 00:04:55.753 CC test/nvme/reserve/reserve.o 00:04:55.753 CXX test/cpp_headers/conf.o 00:04:55.753 CC test/nvme/simple_copy/simple_copy.o 00:04:55.753 CXX test/cpp_headers/config.o 00:04:55.753 CXX test/cpp_headers/cpuset.o 
00:04:55.753 CXX test/cpp_headers/crc16.o 00:04:55.753 CC test/nvme/connect_stress/connect_stress.o 00:04:56.012 CC test/nvme/boot_partition/boot_partition.o 00:04:56.012 CXX test/cpp_headers/crc32.o 00:04:56.012 CXX test/cpp_headers/crc64.o 00:04:56.012 LINK reserve 00:04:56.012 CXX test/cpp_headers/dif.o 00:04:56.012 LINK simple_copy 00:04:56.012 LINK connect_stress 00:04:56.012 LINK memory_ut 00:04:56.012 CXX test/cpp_headers/dma.o 00:04:56.012 CC test/nvme/compliance/nvme_compliance.o 00:04:56.012 LINK boot_partition 00:04:56.012 CXX test/cpp_headers/endian.o 00:04:56.270 CC test/nvme/fused_ordering/fused_ordering.o 00:04:56.270 CXX test/cpp_headers/env_dpdk.o 00:04:56.270 CXX test/cpp_headers/env.o 00:04:56.270 CXX test/cpp_headers/event.o 00:04:56.270 CXX test/cpp_headers/fd_group.o 00:04:56.270 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:56.270 CC test/nvme/fdp/fdp.o 00:04:56.270 CC test/nvme/cuse/cuse.o 00:04:56.270 CXX test/cpp_headers/fd.o 00:04:56.529 LINK fused_ordering 00:04:56.529 LINK nvme_compliance 00:04:56.529 CXX test/cpp_headers/file.o 00:04:56.529 CXX test/cpp_headers/ftl.o 00:04:56.529 CXX test/cpp_headers/gpt_spec.o 00:04:56.529 LINK doorbell_aers 00:04:56.529 CXX test/cpp_headers/hexlify.o 00:04:56.529 CXX test/cpp_headers/histogram_data.o 00:04:56.529 CXX test/cpp_headers/idxd.o 00:04:56.529 CXX test/cpp_headers/idxd_spec.o 00:04:56.529 CXX test/cpp_headers/init.o 00:04:56.529 LINK fdp 00:04:56.787 CXX test/cpp_headers/ioat.o 00:04:56.787 CXX test/cpp_headers/ioat_spec.o 00:04:56.787 CXX test/cpp_headers/iscsi_spec.o 00:04:56.787 CXX test/cpp_headers/json.o 00:04:56.787 CXX test/cpp_headers/jsonrpc.o 00:04:56.787 CXX test/cpp_headers/keyring.o 00:04:56.787 CXX test/cpp_headers/keyring_module.o 00:04:56.787 CXX test/cpp_headers/likely.o 00:04:56.787 CXX test/cpp_headers/log.o 00:04:56.787 CXX test/cpp_headers/lvol.o 00:04:56.787 CXX test/cpp_headers/memory.o 00:04:57.046 CXX test/cpp_headers/mmio.o 00:04:57.046 CXX test/cpp_headers/nbd.o 00:04:57.046 CXX test/cpp_headers/notify.o 00:04:57.046 CXX test/cpp_headers/nvme.o 00:04:57.046 CXX test/cpp_headers/nvme_intel.o 00:04:57.046 CXX test/cpp_headers/nvme_ocssd.o 00:04:57.046 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:57.046 CXX test/cpp_headers/nvme_spec.o 00:04:57.046 CXX test/cpp_headers/nvme_zns.o 00:04:57.046 CXX test/cpp_headers/nvmf_cmd.o 00:04:57.046 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:57.046 CXX test/cpp_headers/nvmf.o 00:04:57.046 CXX test/cpp_headers/nvmf_spec.o 00:04:57.305 CXX test/cpp_headers/nvmf_transport.o 00:04:57.305 CXX test/cpp_headers/opal.o 00:04:57.305 CXX test/cpp_headers/opal_spec.o 00:04:57.305 CXX test/cpp_headers/pci_ids.o 00:04:57.305 CXX test/cpp_headers/pipe.o 00:04:57.305 CXX test/cpp_headers/queue.o 00:04:57.305 CXX test/cpp_headers/reduce.o 00:04:57.305 CXX test/cpp_headers/rpc.o 00:04:57.305 CXX test/cpp_headers/scheduler.o 00:04:57.305 CXX test/cpp_headers/scsi.o 00:04:57.305 CXX test/cpp_headers/scsi_spec.o 00:04:57.305 LINK cuse 00:04:57.305 CXX test/cpp_headers/sock.o 00:04:57.563 CXX test/cpp_headers/stdinc.o 00:04:57.563 CXX test/cpp_headers/string.o 00:04:57.563 CXX test/cpp_headers/thread.o 00:04:57.563 CXX test/cpp_headers/trace.o 00:04:57.563 CXX test/cpp_headers/trace_parser.o 00:04:57.563 CXX test/cpp_headers/tree.o 00:04:57.563 CXX test/cpp_headers/ublk.o 00:04:57.563 CXX test/cpp_headers/util.o 00:04:57.563 CXX test/cpp_headers/uuid.o 00:04:57.563 CXX test/cpp_headers/version.o 00:04:57.563 CXX test/cpp_headers/vfio_user_pci.o 00:04:57.563 CXX 
test/cpp_headers/vfio_user_spec.o 00:04:57.563 CXX test/cpp_headers/vhost.o 00:04:57.563 CXX test/cpp_headers/vmd.o 00:04:57.822 CXX test/cpp_headers/xor.o 00:04:57.822 CXX test/cpp_headers/zipf.o 00:04:58.759 LINK esnap 00:04:59.327 00:04:59.327 real 0m55.586s 00:04:59.327 user 5m6.514s 00:04:59.327 sys 1m2.530s 00:04:59.327 02:49:38 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:04:59.327 02:49:38 -- common/autotest_common.sh@10 -- $ set +x 00:04:59.327 ************************************ 00:04:59.327 END TEST make 00:04:59.327 ************************************ 00:04:59.327 02:49:38 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:59.327 02:49:38 -- pm/common@30 -- $ signal_monitor_resources TERM 00:04:59.327 02:49:38 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:04:59.327 02:49:38 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.327 02:49:38 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:59.327 02:49:38 -- pm/common@45 -- $ pid=5932 00:04:59.327 02:49:38 -- pm/common@52 -- $ sudo kill -TERM 5932 00:04:59.327 02:49:38 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.327 02:49:38 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:59.327 02:49:38 -- pm/common@45 -- $ pid=5933 00:04:59.327 02:49:38 -- pm/common@52 -- $ sudo kill -TERM 5933 00:04:59.587 02:49:38 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:59.587 02:49:38 -- nvmf/common.sh@7 -- # uname -s 00:04:59.587 02:49:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:59.587 02:49:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:59.587 02:49:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:59.587 02:49:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:59.587 02:49:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:59.587 02:49:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:59.587 02:49:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:59.587 02:49:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:59.587 02:49:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:59.587 02:49:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:59.587 02:49:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:04:59.587 02:49:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:04:59.587 02:49:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:59.587 02:49:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:59.587 02:49:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:59.587 02:49:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:59.587 02:49:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:59.587 02:49:38 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:59.587 02:49:38 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:59.587 02:49:38 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:59.587 02:49:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.587 
02:49:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.587 02:49:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.587 02:49:38 -- paths/export.sh@5 -- # export PATH 00:04:59.587 02:49:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:59.587 02:49:38 -- nvmf/common.sh@47 -- # : 0 00:04:59.587 02:49:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:59.587 02:49:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:59.587 02:49:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:59.587 02:49:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:59.587 02:49:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:59.587 02:49:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:59.587 02:49:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:59.587 02:49:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:59.587 02:49:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:59.587 02:49:38 -- spdk/autotest.sh@32 -- # uname -s 00:04:59.587 02:49:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:59.587 02:49:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:59.587 02:49:38 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.587 02:49:38 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:59.587 02:49:38 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:59.587 02:49:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:59.587 02:49:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:59.587 02:49:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:59.587 02:49:38 -- spdk/autotest.sh@48 -- # udevadm_pid=66126 00:04:59.587 02:49:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:59.587 02:49:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:59.587 02:49:38 -- pm/common@17 -- # local monitor 00:04:59.587 02:49:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.587 02:49:38 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=66128 00:04:59.587 02:49:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:59.587 02:49:38 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=66130 00:04:59.587 02:49:38 -- pm/common@26 -- # sleep 1 00:04:59.587 02:49:38 -- pm/common@21 -- # date +%s 00:04:59.587 02:49:38 -- pm/common@21 -- # date +%s 00:04:59.587 02:49:38 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713840578 00:04:59.587 02:49:38 -- pm/common@21 -- # sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1713840578 00:04:59.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713840578_collect-vmstat.pm.log 00:04:59.587 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1713840578_collect-cpu-load.pm.log 00:05:00.524 02:49:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:00.524 02:49:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:00.524 02:49:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:00.524 02:49:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.524 02:49:39 -- spdk/autotest.sh@59 -- # create_test_list 00:05:00.524 02:49:39 -- common/autotest_common.sh@734 -- # xtrace_disable 00:05:00.524 02:49:39 -- common/autotest_common.sh@10 -- # set +x 00:05:00.524 02:49:39 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:00.524 02:49:39 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:00.524 02:49:39 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:00.524 02:49:39 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:00.524 02:49:39 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:00.524 02:49:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:00.524 02:49:39 -- common/autotest_common.sh@1441 -- # uname 00:05:00.524 02:49:39 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:05:00.524 02:49:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:00.524 02:49:39 -- common/autotest_common.sh@1461 -- # uname 00:05:00.524 02:49:39 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:05:00.524 02:49:39 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:05:00.524 02:49:39 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:05:00.524 02:49:39 -- spdk/autotest.sh@72 -- # hash lcov 00:05:00.524 02:49:39 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:05:00.524 02:49:39 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:05:00.524 --rc lcov_branch_coverage=1 00:05:00.524 --rc lcov_function_coverage=1 00:05:00.524 --rc genhtml_branch_coverage=1 00:05:00.524 --rc genhtml_function_coverage=1 00:05:00.524 --rc genhtml_legend=1 00:05:00.524 --rc geninfo_all_blocks=1 00:05:00.524 ' 00:05:00.524 02:49:39 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:05:00.524 --rc lcov_branch_coverage=1 00:05:00.524 --rc lcov_function_coverage=1 00:05:00.524 --rc genhtml_branch_coverage=1 00:05:00.524 --rc genhtml_function_coverage=1 00:05:00.524 --rc genhtml_legend=1 00:05:00.524 --rc geninfo_all_blocks=1 00:05:00.524 ' 00:05:00.524 02:49:39 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:05:00.524 --rc lcov_branch_coverage=1 00:05:00.524 --rc lcov_function_coverage=1 00:05:00.524 --rc genhtml_branch_coverage=1 00:05:00.524 --rc genhtml_function_coverage=1 00:05:00.524 --rc genhtml_legend=1 00:05:00.524 --rc geninfo_all_blocks=1 00:05:00.524 --no-external' 00:05:00.524 02:49:39 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:05:00.524 --rc lcov_branch_coverage=1 00:05:00.524 --rc lcov_function_coverage=1 00:05:00.524 --rc genhtml_branch_coverage=1 00:05:00.524 --rc genhtml_function_coverage=1 00:05:00.524 --rc genhtml_legend=1 00:05:00.524 --rc geninfo_all_blocks=1 00:05:00.524 --no-external' 00:05:00.524 02:49:39 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:05:00.796 lcov: LCOV version 1.14 00:05:00.796 02:49:39 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:08.919 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:05:08.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:05:08.919 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:05:08.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:08.919 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:05:08.919 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:14.197 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:14.197 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:26.444 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:26.444 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:26.444 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:26.444 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:26.444 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:26.445 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:26.445 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:26.445 [geninfo prints the same '<header>.gcno:no functions found' / 'GCOV did not produce any data' warning pair for every remaining header object under /home/vagrant/spdk_repo/spdk/test/cpp_headers, blob_bdev.gcno through thread.gcno] 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce
any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:26.446 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:26.446 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:27.823 02:50:06 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:05:27.823 02:50:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:27.823 02:50:06 -- common/autotest_common.sh@10 -- # set +x 00:05:27.823 02:50:06 -- spdk/autotest.sh@91 -- # rm -f 00:05:27.823 02:50:06 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:28.391 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:28.391 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:28.391 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:28.391 02:50:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:05:28.391 02:50:07 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:28.391 02:50:07 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:28.391 02:50:07 -- common/autotest_common.sh@1656 -- # 
local nvme bdf 00:05:28.391 02:50:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.391 02:50:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:28.391 02:50:07 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:28.391 02:50:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.391 02:50:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:28.391 02:50:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:28.391 02:50:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.391 02:50:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:28.391 02:50:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:28.391 02:50:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:28.391 02:50:07 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:28.391 02:50:07 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:28.391 02:50:07 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:28.391 02:50:07 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:28.391 02:50:07 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:05:28.391 02:50:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.391 02:50:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:28.391 02:50:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:05:28.391 02:50:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:05:28.391 02:50:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:28.391 No valid GPT data, bailing 00:05:28.391 02:50:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:28.650 02:50:07 -- scripts/common.sh@391 -- # pt= 00:05:28.650 02:50:07 -- scripts/common.sh@392 -- # return 1 00:05:28.650 02:50:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:28.650 1+0 records in 00:05:28.650 1+0 records out 00:05:28.650 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0043837 s, 239 MB/s 00:05:28.650 02:50:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.650 02:50:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:28.650 02:50:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:05:28.650 02:50:07 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:05:28.650 02:50:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:28.650 No valid GPT data, bailing 00:05:28.650 02:50:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:28.650 02:50:07 -- scripts/common.sh@391 -- # pt= 00:05:28.650 02:50:07 -- scripts/common.sh@392 -- # return 1 00:05:28.650 02:50:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:28.650 1+0 records in 00:05:28.650 1+0 records out 00:05:28.650 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.00444217 s, 236 MB/s 00:05:28.650 02:50:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.650 02:50:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:28.650 02:50:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:05:28.650 02:50:07 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:05:28.650 02:50:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:28.650 No valid GPT data, bailing 00:05:28.650 02:50:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:28.650 02:50:07 -- scripts/common.sh@391 -- # pt= 00:05:28.650 02:50:07 -- scripts/common.sh@392 -- # return 1 00:05:28.650 02:50:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:28.650 1+0 records in 00:05:28.651 1+0 records out 00:05:28.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00335896 s, 312 MB/s 00:05:28.651 02:50:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:05:28.651 02:50:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:05:28.651 02:50:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:05:28.651 02:50:07 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:05:28.651 02:50:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:28.651 No valid GPT data, bailing 00:05:28.651 02:50:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:28.651 02:50:07 -- scripts/common.sh@391 -- # pt= 00:05:28.651 02:50:07 -- scripts/common.sh@392 -- # return 1 00:05:28.651 02:50:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:28.651 1+0 records in 00:05:28.651 1+0 records out 00:05:28.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00345013 s, 304 MB/s 00:05:28.651 02:50:07 -- spdk/autotest.sh@118 -- # sync 00:05:28.651 02:50:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:28.651 02:50:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:28.651 02:50:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:30.557 02:50:09 -- spdk/autotest.sh@124 -- # uname -s 00:05:30.557 02:50:09 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:05:30.557 02:50:09 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:30.557 02:50:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.557 02:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.557 02:50:09 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 ************************************ 00:05:30.816 START TEST setup.sh 00:05:30.816 ************************************ 00:05:30.816 02:50:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:30.816 * Looking for test storage... 
00:05:30.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:30.816 02:50:09 -- setup/test-setup.sh@10 -- # uname -s 00:05:30.816 02:50:09 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:30.816 02:50:09 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:30.816 02:50:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:30.816 02:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:30.816 02:50:09 -- common/autotest_common.sh@10 -- # set +x 00:05:30.816 ************************************ 00:05:30.816 START TEST acl 00:05:30.816 ************************************ 00:05:30.816 02:50:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:31.075 * Looking for test storage... 00:05:31.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:31.075 02:50:09 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:31.075 02:50:09 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:31.075 02:50:09 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:31.075 02:50:09 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:31.075 02:50:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.075 02:50:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:31.075 02:50:09 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:31.075 02:50:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.075 02:50:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:31.075 02:50:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:31.075 02:50:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.075 02:50:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:05:31.075 02:50:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:05:31.075 02:50:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:31.075 02:50:09 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:05:31.075 02:50:09 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:05:31.075 02:50:09 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:31.075 02:50:09 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:31.075 02:50:09 -- setup/acl.sh@12 -- # devs=() 00:05:31.075 02:50:09 -- setup/acl.sh@12 -- # declare -a devs 00:05:31.075 02:50:09 -- setup/acl.sh@13 -- # drivers=() 00:05:31.075 02:50:09 -- setup/acl.sh@13 -- # declare -A drivers 00:05:31.075 02:50:09 -- setup/acl.sh@51 -- # setup reset 00:05:31.075 02:50:09 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:31.075 02:50:09 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.643 02:50:10 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:31.643 02:50:10 -- setup/acl.sh@16 -- # local dev driver 00:05:31.643 
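The get_zoned_devs walk traced twice in this log reduces to one sysfs read per device; a minimal standalone sketch of that probe (the harness also records each zoned device's PCI address, which this sketch skips):

    # A block device is zoned iff /sys/block/<dev>/queue/zoned reads
    # something other than "none" (e.g. "host-aware" or "host-managed").
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        [[ $(<"$nvme/queue/zoned") != none ]] && zoned_devs[${nvme##*/}]=1
    done
    echo "zoned devices: ${!zoned_devs[*]}"   # empty on this host: every check above read "none"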
02:50:10 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:31.643 02:50:10 -- setup/acl.sh@15 -- # setup output status 00:05:31.643 02:50:10 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.643 02:50:10 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:32.212 02:50:11 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:05:32.212 02:50:11 -- setup/acl.sh@19 -- # continue 00:05:32.212 02:50:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.212 Hugepages 00:05:32.212 node hugesize free / total 00:05:32.212 02:50:11 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:32.212 02:50:11 -- setup/acl.sh@19 -- # continue 00:05:32.212 02:50:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.212 00:05:32.212 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:32.212 02:50:11 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:32.212 02:50:11 -- setup/acl.sh@19 -- # continue 00:05:32.212 02:50:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.471 02:50:11 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:32.471 02:50:11 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:32.471 02:50:11 -- setup/acl.sh@20 -- # continue 00:05:32.471 02:50:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.471 02:50:11 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:05:32.471 02:50:11 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.471 02:50:11 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:32.471 02:50:11 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.471 02:50:11 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.471 02:50:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.471 02:50:11 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:05:32.471 02:50:11 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:32.471 02:50:11 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:32.471 02:50:11 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:32.471 02:50:11 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:32.471 02:50:11 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:32.471 02:50:11 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:32.471 02:50:11 -- setup/acl.sh@54 -- # run_test denied denied 00:05:32.471 02:50:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:32.471 02:50:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:32.471 02:50:11 -- common/autotest_common.sh@10 -- # set +x 00:05:32.471 ************************************ 00:05:32.471 START TEST denied 00:05:32.471 ************************************ 00:05:32.471 02:50:11 -- common/autotest_common.sh@1111 -- # denied 00:05:32.471 02:50:11 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:05:32.471 02:50:11 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:05:32.471 02:50:11 -- setup/acl.sh@38 -- # setup output config 00:05:32.471 02:50:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.471 02:50:11 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.410 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:05:33.410 02:50:12 -- setup/acl.sh@40 -- # verify 0000:00:10.0 00:05:33.410 02:50:12 -- setup/acl.sh@28 -- # local dev driver 00:05:33.410 02:50:12 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:33.410 02:50:12 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:05:33.410 02:50:12 -- setup/acl.sh@32 -- # readlink 
-f /sys/bus/pci/devices/0000:00:10.0/driver 00:05:33.410 02:50:12 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:33.410 02:50:12 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:33.410 02:50:12 -- setup/acl.sh@41 -- # setup reset 00:05:33.410 02:50:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:33.410 02:50:12 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:33.977 00:05:33.977 real 0m1.368s 00:05:33.977 user 0m0.554s 00:05:33.977 sys 0m0.765s 00:05:33.977 02:50:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:33.977 02:50:12 -- common/autotest_common.sh@10 -- # set +x 00:05:33.977 ************************************ 00:05:33.977 END TEST denied 00:05:33.977 ************************************ 00:05:33.977 02:50:13 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:33.977 02:50:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.977 02:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.977 02:50:13 -- common/autotest_common.sh@10 -- # set +x 00:05:33.977 ************************************ 00:05:33.977 START TEST allowed 00:05:33.977 ************************************ 00:05:33.977 02:50:13 -- common/autotest_common.sh@1111 -- # allowed 00:05:33.977 02:50:13 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:05:33.977 02:50:13 -- setup/acl.sh@45 -- # setup output config 00:05:33.977 02:50:13 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.977 02:50:13 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.977 02:50:13 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:05:34.941 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.941 02:50:13 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:05:34.941 02:50:13 -- setup/acl.sh@28 -- # local dev driver 00:05:34.941 02:50:13 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:34.941 02:50:13 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:05:34.941 02:50:13 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:05:34.941 02:50:13 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:34.941 02:50:13 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:34.941 02:50:13 -- setup/acl.sh@48 -- # setup reset 00:05:34.941 02:50:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:34.941 02:50:13 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:35.508 00:05:35.508 real 0m1.434s 00:05:35.508 user 0m0.660s 00:05:35.508 sys 0m0.760s 00:05:35.508 02:50:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.508 02:50:14 -- common/autotest_common.sh@10 -- # set +x 00:05:35.508 ************************************ 00:05:35.508 END TEST allowed 00:05:35.508 ************************************ 00:05:35.508 00:05:35.508 real 0m4.645s 00:05:35.508 user 0m2.084s 00:05:35.508 sys 0m2.477s 00:05:35.508 02:50:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:35.508 02:50:14 -- common/autotest_common.sh@10 -- # set +x 00:05:35.508 ************************************ 00:05:35.508 END TEST acl 00:05:35.508 ************************************ 00:05:35.508 02:50:14 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:35.508 02:50:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.508 02:50:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.508 02:50:14 -- common/autotest_common.sh@10 -- # 
set +x 00:05:35.508 ************************************ 00:05:35.508 START TEST hugepages 00:05:35.508 ************************************ 00:05:35.508 02:50:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:35.768 * Looking for test storage... 00:05:35.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:35.768 02:50:14 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:35.768 02:50:14 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:35.768 02:50:14 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:35.768 02:50:14 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:35.768 02:50:14 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:35.768 02:50:14 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:35.768 02:50:14 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:35.768 02:50:14 -- setup/common.sh@18 -- # local node= 00:05:35.768 02:50:14 -- setup/common.sh@19 -- # local var val 00:05:35.768 02:50:14 -- setup/common.sh@20 -- # local mem_f mem 00:05:35.768 02:50:14 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:35.768 02:50:14 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:35.768 02:50:14 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:35.768 02:50:14 -- setup/common.sh@28 -- # mapfile -t mem 00:05:35.768 02:50:14 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:35.768 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.768 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.768 02:50:14 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 4161540 kB' 'MemAvailable: 7394412 kB' 'Buffers: 2436 kB' 'Cached: 3434864 kB' 'SwapCached: 0 kB' 'Active: 835892 kB' 'Inactive: 2708868 kB' 'Active(anon): 117972 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708868 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1356 kB' 'Writeback: 0 kB' 'AnonPages: 109144 kB' 'Mapped: 48876 kB' 'Shmem: 10492 kB' 'KReclaimable: 86044 kB' 'Slab: 166008 kB' 'SReclaimable: 86044 kB' 'SUnreclaim: 79964 kB' 'KernelStack: 6808 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 341132 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:35.768 02:50:14 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.768 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.768 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.768 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.768 02:50:14 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.768 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.768 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.768 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.768 02:50:14 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:05:35.768 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.768 [the same setup/common.sh@31/@32 xtrace quartet (IFS=': ', read -r var val _, [[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]], continue) repeats for every /proc/meminfo field from Buffers through AnonHugePages] 00:05:35.769 02:50:14 --
setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # continue 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # IFS=': ' 00:05:35.769 02:50:14 -- setup/common.sh@31 -- # read -r var val _ 00:05:35.769 02:50:14 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:35.769 02:50:14 -- setup/common.sh@33 -- # echo 2048 00:05:35.769 02:50:14 -- setup/common.sh@33 -- # return 0 00:05:35.769 02:50:14 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:35.769 02:50:14 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:35.769 02:50:14 -- 
setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:35.769 02:50:14 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:35.769 02:50:14 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:35.769 02:50:14 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:05:35.769 02:50:14 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:35.769 02:50:14 -- setup/hugepages.sh@207 -- # get_nodes 00:05:35.769 02:50:14 -- setup/hugepages.sh@27 -- # local node 00:05:35.769 02:50:14 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:35.769 02:50:14 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:35.769 02:50:14 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:35.769 02:50:14 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:35.769 02:50:14 -- setup/hugepages.sh@208 -- # clear_hp 00:05:35.769 02:50:14 -- setup/hugepages.sh@37 -- # local node hp 00:05:35.769 02:50:14 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:35.769 02:50:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:35.769 02:50:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:35.769 02:50:14 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:35.769 02:50:14 -- setup/hugepages.sh@41 -- # echo 0 00:05:35.769 02:50:14 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:35.769 02:50:14 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:35.769 02:50:14 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:35.769 02:50:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:35.769 02:50:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:35.769 02:50:14 -- common/autotest_common.sh@10 -- # set +x 00:05:35.769 ************************************ 00:05:35.769 START TEST default_setup 00:05:35.769 ************************************ 00:05:35.769 02:50:14 -- common/autotest_common.sh@1111 -- # default_setup 00:05:35.769 02:50:14 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:35.769 02:50:14 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:35.769 02:50:14 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:35.769 02:50:14 -- setup/hugepages.sh@51 -- # shift 00:05:35.769 02:50:14 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:35.769 02:50:14 -- setup/hugepages.sh@52 -- # local node_ids 00:05:35.769 02:50:14 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:35.769 02:50:14 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:35.769 02:50:14 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:35.769 02:50:14 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:35.769 02:50:14 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:35.769 02:50:14 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:35.769 02:50:14 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:35.769 02:50:14 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:35.769 02:50:14 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:35.769 02:50:14 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:35.769 02:50:14 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:35.769 02:50:14 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:35.769 02:50:14 -- setup/hugepages.sh@73 -- # return 0 00:05:35.769 02:50:14 -- setup/hugepages.sh@137 -- # setup output 00:05:35.769 02:50:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:35.769 
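The get_test_nr_hugepages trace above boils down to one division: the requested test size (2097152 kB) over the default hugepage size that get_meminfo just returned (2048 kB) gives nr_hugepages=1024, after clear_hp has zeroed every per-node hugepage pool. A minimal stand-alone sketch of those two steps, using the standard sysfs/procfs paths shown in the trace (an illustration, not the verbatim SPDK helpers):

    #!/usr/bin/env bash
    # Sizing step: derive the hugepage count from a target size and the
    # kernel's default hugepage size (2097152 / 2048 = 1024 here).
    size_kb=2097152
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    nr_hugepages=$(( size_kb / hp_kb ))
    echo "nr_hugepages=$nr_hugepages"

    # clear_hp-style reset: zero every hugepage pool on every NUMA node
    # before the test allocates its own (writing sysfs needs root).
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done

The trace shows the inner 'echo 0' firing twice on this runner, once per pool directory (the 2048 kB and 1 GB sizes) under node0.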
00:05:35.769 02:50:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:36.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:36.598 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:05:36.598 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:05:36.598 02:50:15 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:36.598 02:50:15 -- setup/hugepages.sh@89 -- # local node
00:05:36.598 02:50:15 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:36.598 02:50:15 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:36.598 02:50:15 -- setup/hugepages.sh@92 -- # local surp
00:05:36.598 02:50:15 -- setup/hugepages.sh@93 -- # local resv
00:05:36.598 02:50:15 -- setup/hugepages.sh@94 -- # local anon
00:05:36.598 02:50:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:36.598 02:50:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:36.598 02:50:15 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:36.598 02:50:15 -- setup/common.sh@18 -- # local node=
00:05:36.598 02:50:15 -- setup/common.sh@19 -- # local var val
00:05:36.598 02:50:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:36.598 02:50:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.598 02:50:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.598 02:50:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.598 02:50:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.598 02:50:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.598 02:50:15 -- setup/common.sh@31 -- # IFS=': '
00:05:36.598 02:50:15 -- setup/common.sh@31 -- # read -r var val _
00:05:36.598 02:50:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6257596 kB' 'MemAvailable: 9490348 kB' 'Buffers: 2436 kB' 'Cached: 3434892 kB' 'SwapCached: 0 kB' 'Active: 852324 kB' 'Inactive: 2708936 kB' 'Active(anon): 134404 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 125516 kB' 'Mapped: 48960 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165460 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79788 kB' 'KernelStack: 6784 kB' 'PageTables: 4632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:36.599 [xtrace elided: the scan walks MemTotal through HardwareCorrupted, one 'continue' per key that fails the AnonHugePages match]
00:05:36.599 02:50:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:36.599 02:50:15 -- setup/common.sh@33 -- # echo 0
00:05:36.599 02:50:15 -- setup/common.sh@33 -- # return 0
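Each get_meminfo call traced here follows the same pattern: load the meminfo file into an array, then read it back a 'var val' pair at a time with IFS=': ' until the requested key matches, echo the value, and return. A self-contained re-creation of that pattern (illustrative, not the verbatim setup/common.sh helper; the per-node branch mirrors the '-e /sys/devices/system/node/.../meminfo' check in the trace):

    #!/usr/bin/env bash
    # get_meminfo KEY [NODE] -- print one value from /proc/meminfo, or
    # from a per-node meminfo file when NODE is given.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node N "; strip it so
        # the key comparison works for both sources (the traced helper
        # does the same with "${mem[@]#Node +([0-9]) }").
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* *//' "$mem_f")
        return 1
    }

    get_meminfo AnonHugePages    # -> 0 on this runner
    get_meminfo HugePages_Total  # -> 1024

Under 'set -x' every failed comparison and 'continue' is traced, which is why a single lookup produces the long runs of near-identical lines elided above.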
00:05:36.599 02:50:15 -- setup/hugepages.sh@97 -- # anon=0
00:05:36.599 02:50:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:36.600 02:50:15 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:36.600 02:50:15 -- setup/common.sh@18 -- # local node=
00:05:36.600 02:50:15 -- setup/common.sh@19 -- # local var val
00:05:36.600 02:50:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:36.600 02:50:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.600 02:50:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.600 02:50:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.600 02:50:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.600 02:50:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.600 02:50:15 -- setup/common.sh@31 -- # IFS=': '
00:05:36.600 02:50:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6257508 kB' 'MemAvailable: 9490260 kB' 'Buffers: 2436 kB' 'Cached: 3434892 kB' 'SwapCached: 0 kB' 'Active: 852240 kB' 'Inactive: 2708936 kB' 'Active(anon): 134320 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 125412 kB' 'Mapped: 48900 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165452 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79780 kB' 'KernelStack: 6704 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:36.600 02:50:15 -- setup/common.sh@31 -- # read -r var val _
00:05:36.600 [xtrace elided: the scan walks MemTotal through HugePages_Rsvd, one 'continue' per key that fails the HugePages_Surp match]
00:05:36.601 02:50:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:36.601 02:50:15 -- setup/common.sh@33 -- # echo 0
00:05:36.601 02:50:15 -- setup/common.sh@33 -- # return 0
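Rescanning the whole file once per key, as these lookups do, is simple but extremely chatty under xtrace. The same counters the surrounding calls collect one at a time can be pulled in a single pass; a one-liner sketch, with the expected output taken from the meminfo snapshots above:

    awk '/^(AnonHugePages|HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize):/' /proc/meminfo
    # AnonHugePages:         0 kB
    # HugePages_Total:    1024
    # HugePages_Free:     1024
    # HugePages_Rsvd:        0
    # HugePages_Surp:        0
    # Hugepagesize:       2048 kB

HugePages_Free equals HugePages_Total here because the pool has just been allocated and nothing has mapped a hugepage yet.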
00:05:36.601 02:50:15 -- setup/hugepages.sh@99 -- # surp=0
00:05:36.601 02:50:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:36.601 02:50:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:36.601 02:50:15 -- setup/common.sh@18 -- # local node=
00:05:36.601 02:50:15 -- setup/common.sh@19 -- # local var val
00:05:36.601 02:50:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:36.601 02:50:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.601 02:50:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.601 02:50:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.601 02:50:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.601 02:50:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.601 02:50:15 -- setup/common.sh@31 -- # IFS=': '
00:05:36.601 02:50:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6257508 kB' 'MemAvailable: 9490260 kB' 'Buffers: 2436 kB' 'Cached: 3434892 kB' 'SwapCached: 0 kB' 'Active: 852000 kB' 'Inactive: 2708936 kB' 'Active(anon): 134080 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 125228 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165448 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79776 kB' 'KernelStack: 6704 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 359676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:36.601 02:50:15 -- setup/common.sh@31 -- # read -r var val _
00:05:36.602 [xtrace elided: the scan walks MemTotal through HugePages_Free, one 'continue' per key that fails the HugePages_Rsvd match]
00:05:36.602 02:50:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:36.602 02:50:15 -- setup/common.sh@33 -- # echo 0
00:05:36.602 02:50:15 -- setup/common.sh@33 -- # return 0
00:05:36.602 02:50:15 -- setup/hugepages.sh@100 -- # resv=0
00:05:36.602 nr_hugepages=1024
00:05:36.602 02:50:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:36.602 resv_hugepages=0
00:05:36.602 02:50:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:36.602 surplus_hugepages=0
00:05:36.602 02:50:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:36.602 anon_hugepages=0
00:05:36.602 02:50:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:36.602 02:50:15 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:36.602 02:50:15 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
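The two arithmetic checks just traced are the heart of verify_nr_hugepages: the count the test requested (nr_hugepages=1024) must match what the kernel actually reports once the surplus and reserved counts from the previous lookups are taken into account (1024 == 1024 + 0 + 0 here). Restated as a stand-alone check (illustrative, not the verbatim hugepages.sh lines):

    #!/usr/bin/env bash
    # Values the preceding get_meminfo calls returned on this runner.
    nr_hugepages=1024 surp=0 resv=0
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch: $total != $((nr_hugepages + surp + resv))" >&2
        exit 1
    fi

The follow-up get_meminfo HugePages_Total below is the read that feeds the left-hand side of this comparison.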
00:05:36.602 02:50:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:36.602 02:50:15 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:36.602 02:50:15 -- setup/common.sh@18 -- # local node=
00:05:36.602 02:50:15 -- setup/common.sh@19 -- # local var val
00:05:36.602 02:50:15 -- setup/common.sh@20 -- # local mem_f mem
00:05:36.602 02:50:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:36.602 02:50:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:36.602 02:50:15 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:36.602 02:50:15 -- setup/common.sh@28 -- # mapfile -t mem
00:05:36.602 02:50:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:36.602 02:50:15 -- setup/common.sh@31 -- # IFS=': '
00:05:36.602 02:50:15 -- setup/common.sh@31 -- # read -r var val _
00:05:36.603 02:50:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6257508 kB' 'MemAvailable: 9490260 kB' 'Buffers: 2436 kB' 'Cached: 3434892 kB' 'SwapCached: 0 kB' 'Active: 851572 kB' 'Inactive: 2708936 kB' 'Active(anon): 133652 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708936 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'AnonPages: 124544 kB' 'Mapped: 48828 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165444 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79772 kB' 'KernelStack: 6688 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 357376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:36.864 [xtrace elided: the scan walks MemTotal through Committed_AS, one 'continue' per key that fails the HugePages_Total match; the captured log cuts off mid-scan]
00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var
val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.864 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.864 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- 
setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:36.865 02:50:15 -- setup/common.sh@33 -- # echo 1024 00:05:36.865 02:50:15 -- setup/common.sh@33 -- # return 0 00:05:36.865 02:50:15 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:36.865 02:50:15 -- setup/hugepages.sh@112 -- # get_nodes 00:05:36.865 02:50:15 -- setup/hugepages.sh@27 -- # local node 00:05:36.865 02:50:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:36.865 02:50:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:36.865 02:50:15 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:36.865 02:50:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:36.865 02:50:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:36.865 02:50:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:36.865 02:50:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:36.865 02:50:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:36.865 02:50:15 -- setup/common.sh@18 -- # local node=0 00:05:36.865 02:50:15 -- setup/common.sh@19 -- # local var val 00:05:36.865 02:50:15 -- setup/common.sh@20 -- # local mem_f mem 00:05:36.865 02:50:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:36.865 02:50:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:36.865 02:50:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:36.865 02:50:15 -- setup/common.sh@28 -- # mapfile -t mem 00:05:36.865 02:50:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6258028 kB' 'MemUsed: 5983952 kB' 'SwapCached: 0 kB' 'Active: 851700 kB' 'Inactive: 2708940 kB' 'Active(anon): 133780 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 260 kB' 'Writeback: 0 kB' 'FilePages: 3437332 kB' 'Mapped: 48828 kB' 'AnonPages: 124936 kB' 'Shmem: 10468 kB' 'KernelStack: 6720 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 165444 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 
02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.865 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.865 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # continue 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # IFS=': ' 00:05:36.866 02:50:15 -- setup/common.sh@31 -- # read -r var val _ 00:05:36.866 02:50:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:36.866 02:50:15 -- setup/common.sh@33 -- # echo 0 00:05:36.866 02:50:15 -- setup/common.sh@33 -- # return 0 00:05:36.866 02:50:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:36.866 02:50:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:36.866 02:50:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:36.866 02:50:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:36.866 node0=1024 expecting 1024 00:05:36.866 ************************************ 00:05:36.866 END TEST default_setup 00:05:36.866 ************************************ 00:05:36.866 02:50:15 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:36.866 02:50:15 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:36.866 00:05:36.866 real 0m0.963s 00:05:36.866 user 0m0.448s 00:05:36.866 sys 0m0.457s 00:05:36.866 02:50:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:36.866 02:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:36.866 02:50:15 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:36.866 02:50:15 
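The default_setup test that just ended verified `(( 1024 == nr_hugepages + surp + resv ))` and then walked each NUMA node, echoing "node0=1024 expecting 1024". A sketch of that per-node accounting is below; the helper names and the awk extraction are ours, and it assumes surplus and reserved pages are both 0, as they were in this run.

#!/usr/bin/env bash
shopt -s extglob
expected=1024
declare -A nodes_total
# Enumerate nodes the way get_nodes does in the trace.
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    # Per-node meminfo lines read "Node 0 HugePages_Total: 1024",
    # so the count is the fourth field.
    nodes_total[$id]=$(awk '/HugePages_Total/ {print $4}' "$node/meminfo")
done
sum=0
for id in "${!nodes_total[@]}"; do
    echo "node$id=${nodes_total[$id]} expecting $expected"
    ((sum += nodes_total[id]))
done
((sum == expected)) && echo OK

On a single-node VM like this one the loop runs once, so the check reduces to node0's total matching the expected 1024.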
-- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.866 02:50:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.866 02:50:15 -- common/autotest_common.sh@10 -- # set +x 00:05:36.866 ************************************ 00:05:36.866 START TEST per_node_1G_alloc 00:05:36.866 ************************************ 00:05:36.866 02:50:15 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:05:36.866 02:50:15 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:36.866 02:50:15 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:36.866 02:50:15 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:36.866 02:50:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:36.866 02:50:15 -- setup/hugepages.sh@51 -- # shift 00:05:36.866 02:50:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:36.866 02:50:15 -- setup/hugepages.sh@52 -- # local node_ids 00:05:36.866 02:50:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:36.866 02:50:15 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:36.866 02:50:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:36.866 02:50:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:36.866 02:50:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:36.866 02:50:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:36.866 02:50:15 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:36.866 02:50:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:36.866 02:50:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:36.866 02:50:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:36.866 02:50:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:36.866 02:50:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:36.866 02:50:15 -- setup/hugepages.sh@73 -- # return 0 00:05:36.866 02:50:15 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:36.866 02:50:15 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:36.866 02:50:15 -- setup/hugepages.sh@146 -- # setup output 00:05:36.866 02:50:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:36.866 02:50:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.126 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.126 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:37.389 02:50:16 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:37.389 02:50:16 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:37.389 02:50:16 -- setup/hugepages.sh@89 -- # local node 00:05:37.389 02:50:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:37.389 02:50:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:37.389 02:50:16 -- setup/hugepages.sh@92 -- # local surp 00:05:37.389 02:50:16 -- setup/hugepages.sh@93 -- # local resv 00:05:37.389 02:50:16 -- setup/hugepages.sh@94 -- # local anon 00:05:37.389 02:50:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:37.389 02:50:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:37.389 02:50:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:37.389 02:50:16 -- setup/common.sh@18 -- # local node= 00:05:37.389 02:50:16 -- setup/common.sh@19 -- # local var val 00:05:37.389 02:50:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.389 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.389 02:50:16 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.389 02:50:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.389 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.389 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7309804 kB' 'MemAvailable: 10542564 kB' 'Buffers: 2436 kB' 'Cached: 3434896 kB' 'SwapCached: 0 kB' 'Active: 851580 kB' 'Inactive: 2708944 kB' 'Active(anon): 133660 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 124492 kB' 'Mapped: 49008 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165448 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79776 kB' 'KernelStack: 6724 kB' 'PageTables: 4488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 
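The per_node_1G_alloc setup traced just above calls `get_test_nr_hugepages 1048576 0`, which turns a 1 GiB request into a per-node page count. A sketch of that arithmetic, using the 2048 kB hugepage size reported in the meminfo dumps:

#!/usr/bin/env bash
size_kb=1048576        # requested size in kB (1 GiB), first argument above
hugepagesize_kb=2048   # 'Hugepagesize: 2048 kB' from the dumps
nr_hugepages=$((size_kb / hugepagesize_kb))
echo "NRHUGE=$nr_hugepages HUGENODE=0"   # -> NRHUGE=512 HUGENODE=0

That is why the HugePages_Total/HugePages_Free values in the dumps that follow read 512 rather than the 1024 used by default_setup: the whole allocation is pinned to node 0.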
-- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 
02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.389 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.389 02:50:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- 
# [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:37.390 02:50:16 -- setup/common.sh@33 -- # echo 0 00:05:37.390 02:50:16 -- setup/common.sh@33 -- # return 0 00:05:37.390 02:50:16 -- setup/hugepages.sh@97 -- # anon=0 00:05:37.390 02:50:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:37.390 02:50:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:37.390 02:50:16 -- setup/common.sh@18 -- # local node= 00:05:37.390 02:50:16 -- setup/common.sh@19 -- # local var val 00:05:37.390 02:50:16 -- setup/common.sh@20 -- # local mem_f mem 00:05:37.390 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:37.390 02:50:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:37.390 02:50:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:37.390 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem 00:05:37.390 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7309804 kB' 'MemAvailable: 10542564 kB' 'Buffers: 2436 kB' 'Cached: 3434896 kB' 'SwapCached: 0 kB' 'Active: 850908 kB' 'Inactive: 2708944 kB' 'Active(anon): 132988 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 
kB' 'Inactive(file): 2708944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 124092 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6688 kB' 'PageTables: 4324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.390 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.390 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # 
continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # continue 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.391 02:50:16 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.391 02:50:16 -- setup/common.sh@32 -- # 
[xtrace loop condensed: common.sh@31-@32 compares each remaining /proc/meminfo key (Committed_AS through HugePages_Rsvd) against HugePages_Surp and skips it with "continue"]
00:05:37.392 02:50:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.392 02:50:16 -- setup/common.sh@33 -- # echo 0
00:05:37.392 02:50:16 -- setup/common.sh@33 -- # return 0
00:05:37.392 02:50:16 -- setup/hugepages.sh@99 -- # surp=0
00:05:37.392 02:50:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:37.392 02:50:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:37.392 02:50:16 -- setup/common.sh@18 -- # local node=
00:05:37.392 02:50:16 -- setup/common.sh@19 -- # local var val
00:05:37.392 02:50:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.392 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.392 02:50:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.392 02:50:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.392 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.392 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.392 02:50:16 -- setup/common.sh@31 -- # IFS=': '
00:05:37.392 02:50:16 -- setup/common.sh@31 -- # read -r var val _
00:05:37.392 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7309804 kB' 'MemAvailable: 10542564 kB' 'Buffers: 2436 kB' 'Cached: 3434896 kB' 'SwapCached: 0 kB' 'Active: 851248 kB' 'Inactive: 2708944 kB' 'Active(anon): 133328 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 124528 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6736 kB' 'PageTables: 4480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace loop condensed: the same scan walks this snapshot key by key (MemTotal through HugePages_Free), skipping everything that is not HugePages_Rsvd]
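
The trace above is setup/common.sh's get_meminfo helper at work: mapfile snapshots the meminfo file, common.sh@29 strips any leading "Node <n> " prefix with an extglob pattern, and a read loop splits each line on IFS=': ' until the requested key matches. A minimal standalone sketch of that parsing pattern follows; the function name is invented for illustration, and this is not the exact SPDK source:

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern traced above: snapshot the file,
    # split each "Key:   value [kB]" line on ':' plus spaces, and print
    # the value of the requested key.
    get_meminfo_sketch() {
        local get=$1 var val _ line
        local -a mem
        mapfile -t mem < /proc/meminfo
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Every non-matching key is skipped, exactly as in the xtrace.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo_sketch HugePages_Surp   # printed 0 on the machine traced above
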
00:05:37.393 02:50:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.393 02:50:16 -- setup/common.sh@33 -- # echo 0
00:05:37.393 02:50:16 -- setup/common.sh@33 -- # return 0
00:05:37.393 nr_hugepages=512
00:05:37.393 02:50:16 -- setup/hugepages.sh@100 -- # resv=0
00:05:37.393 02:50:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:37.393 resv_hugepages=0
00:05:37.393 surplus_hugepages=0
00:05:37.393 anon_hugepages=0
00:05:37.393 02:50:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:37.393 02:50:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:37.393 02:50:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:37.393 02:50:16 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:37.393 02:50:16 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:37.394 02:50:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:37.394 02:50:16 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:37.394 02:50:16 -- setup/common.sh@18 -- # local node=
00:05:37.394 02:50:16 -- setup/common.sh@19 -- # local var val
00:05:37.394 02:50:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.394 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.394 02:50:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.394 02:50:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.394 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.394 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.394 02:50:16 -- setup/common.sh@31 -- # IFS=': '
00:05:37.394 02:50:16 -- setup/common.sh@31 -- # read -r var val _
00:05:37.394 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7309804 kB' 'MemAvailable: 10542560 kB' 'Buffers: 2436 kB' 'Cached: 3434892 kB' 'SwapCached: 0 kB' 'Active: 850672 kB' 'Inactive: 2708940 kB' 'Active(anon): 132752 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708940 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'AnonPages: 123856 kB' 'Mapped: 48840 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6656 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace loop condensed: the scan walks this snapshot key by key (MemTotal through Unaccepted), skipping everything that is not HugePages_Total]
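
hugepages.sh@99-@110 above collect HugePages_Surp, HugePages_Rsvd, and HugePages_Total and assert one invariant: the kernel's total must equal the requested page count plus surplus plus reserved pages. Restated with the values just read from /proc/meminfo (a sketch of the arithmetic, not the script itself):

    # Values extracted by the three get_meminfo calls traced above.
    nr_hugepages=512   # requested by the test
    surp=0             # HugePages_Surp
    resv=0             # HugePages_Rsvd
    total=512          # HugePages_Total
    # The check at hugepages.sh@107/@110: 512 == 512 + 0 + 0.
    (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting OK'
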
00:05:37.395 02:50:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:37.395 02:50:16 -- setup/common.sh@33 -- # echo 512
00:05:37.395 02:50:16 -- setup/common.sh@33 -- # return 0
00:05:37.395 02:50:16 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:37.395 02:50:16 -- setup/hugepages.sh@112 -- # get_nodes
00:05:37.395 02:50:16 -- setup/hugepages.sh@27 -- # local node
00:05:37.395 02:50:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:37.395 02:50:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:37.395 02:50:16 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:37.395 02:50:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:37.395 02:50:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:37.395 02:50:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:37.395 02:50:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:37.395 02:50:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.395 02:50:16 -- setup/common.sh@18 -- # local node=0
00:05:37.395 02:50:16 -- setup/common.sh@19 -- # local var val
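
get_meminfo HugePages_Surp 0 above runs the same parser against the per-NUMA-node meminfo file. Those lines carry a "Node 0 " prefix, which is why common.sh@29 strips it before splitting. A hedged sketch of the node-aware branch, with paths as in the trace (extglob is required for the +([0-9]) pattern):

    #!/usr/bin/env bash
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    # Prefer the per-node file when it exists, as at common.sh@23-@24 above.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node lines read "Node 0 HugePages_Surp: 0"; drop the prefix so the
    # usual "Key: value" split works unchanged.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep -E '^HugePages_'
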
00:05:37.395 02:50:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.395 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.395 02:50:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:37.395 02:50:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:37.395 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.395 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.395 02:50:16 -- setup/common.sh@31 -- # IFS=': '
00:05:37.395 02:50:16 -- setup/common.sh@31 -- # read -r var val _
00:05:37.396 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7309804 kB' 'MemUsed: 4932176 kB' 'SwapCached: 0 kB' 'Active: 850976 kB' 'Inactive: 2708944 kB' 'Active(anon): 133056 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708944 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 680 kB' 'Writeback: 0 kB' 'FilePages: 3437332 kB' 'Mapped: 48840 kB' 'AnonPages: 124196 kB' 'Shmem: 10468 kB' 'KernelStack: 6704 kB' 'PageTables: 4384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace loop condensed: the per-key scan repeats over the node0 snapshot (MemTotal through HugePages_Free), skipping everything that is not HugePages_Surp]
00:05:37.397 02:50:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.397 02:50:16 -- setup/common.sh@33 -- # echo 0
00:05:37.397 02:50:16 -- setup/common.sh@33 -- # return 0
00:05:37.397 node0=512 expecting 512
00:05:37.397 ************************************
00:05:37.397 END TEST per_node_1G_alloc
00:05:37.397 ************************************
00:05:37.397 02:50:16 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:37.397 02:50:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:37.397 02:50:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:37.397 02:50:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:37.397 02:50:16 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:37.397 02:50:16 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:37.397
00:05:37.397 real    0m0.547s
00:05:37.397 user    0m0.259s
00:05:37.397 sys     0m0.288s
00:05:37.397 02:50:16 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:37.397 02:50:16 -- common/autotest_common.sh@10 -- # set +x
00:05:37.397 02:50:16 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:37.397 02:50:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:37.397 02:50:16 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:37.397 02:50:16 -- common/autotest_common.sh@10 -- # set +x
00:05:37.656 ************************************
00:05:37.656 START TEST even_2G_alloc
00:05:37.656 ************************************
00:05:37.656 02:50:16 -- common/autotest_common.sh@1111 -- # even_2G_alloc
00:05:37.656 02:50:16 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:37.656 02:50:16 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:37.656 02:50:16 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:37.656 02:50:16 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:37.656 02:50:16 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:37.656 02:50:16 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:37.656 02:50:16 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:37.656 02:50:16 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:37.656 02:50:16 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:37.656 02:50:16 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:37.656 02:50:16 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:37.656 02:50:16 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:37.656 02:50:16 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
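
get_test_nr_hugepages 2097152 above turns a target allocation size into a page count: the trace is consistent with dividing 2097152 kB (2 GiB) by the 2048 kB default hugepage size to reach the nr_hugepages=1024 seen at hugepages.sh@57. A sketch of that conversion; the kB unit and the division are inferred from the 'Hugetlb: 2097152 kB' snapshot values, not stated explicitly in the trace:

    # size -> page count, as suggested by hugepages.sh@49-@57 above.
    default_hugepages=2048            # Hugepagesize from /proc/meminfo, in kB
    size=2097152                      # requested allocation in kB (2 GiB)
    # Guard seen at hugepages.sh@55: the request must cover at least one page.
    (( size >= default_hugepages )) || exit 1
    echo "nr_hugepages=$(( size / default_hugepages ))"   # -> 1024
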
00:05:37.656 02:50:16 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:37.656 02:50:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:37.656 02:50:16 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:37.656 02:50:16 -- setup/hugepages.sh@83 -- # : 0
00:05:37.656 02:50:16 -- setup/hugepages.sh@84 -- # : 0
00:05:37.656 02:50:16 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:37.656 02:50:16 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:37.656 02:50:16 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:37.656 02:50:16 -- setup/hugepages.sh@153 -- # setup output
00:05:37.656 02:50:16 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:37.656 02:50:16 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:37.918 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:37.918 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:37.918 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:37.918 02:50:16 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:37.918 02:50:16 -- setup/hugepages.sh@89 -- # local node
00:05:37.918 02:50:16 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:37.918 02:50:16 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:37.918 02:50:16 -- setup/hugepages.sh@92 -- # local surp
00:05:37.918 02:50:16 -- setup/hugepages.sh@93 -- # local resv
00:05:37.918 02:50:16 -- setup/hugepages.sh@94 -- # local anon
00:05:37.918 02:50:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:37.918 02:50:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:37.918 02:50:16 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:37.918 02:50:16 -- setup/common.sh@18 -- # local node=
00:05:37.918 02:50:16 -- setup/common.sh@19 -- # local var val
00:05:37.918 02:50:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.918 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.918 02:50:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.918 02:50:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.918 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.918 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.918 02:50:16 -- setup/common.sh@31 -- # IFS=': '
00:05:37.918 02:50:16 -- setup/common.sh@31 -- # read -r var val _
00:05:37.918 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259296 kB' 'MemAvailable: 9492060 kB' 'Buffers: 2436 kB' 'Cached: 3434900 kB' 'SwapCached: 0 kB' 'Active: 851640 kB' 'Inactive: 2708948 kB' 'Active(anon): 133720 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 124588 kB' 'Mapped: 49208 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165448 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79776 kB' 'KernelStack: 6728 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace loop condensed: the per-key scan walks this snapshot (MemTotal through HardwareCorrupted), skipping everything that is not AnonHugePages]
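
verify_nr_hugepages starts at hugepages.sh@96 by checking the transparent hugepage mode: the kernel reports a string like "always [madvise] never", with brackets marking the active setting, and the glob test only fails when "[never]" is selected. A sketch of that check, assuming the standard kernel sysfs path (the script's exact source is not shown in the trace):

    # Sketch of the THP-mode test at hugepages.sh@96 above.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP can create anonymous hugepages, so AnonHugePages is sampled too.
        echo "THP mode: $thp"
    fi
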
00:05:37.918 02:50:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.918 02:50:16 -- setup/common.sh@32 -- # continue
00:05:37.918 02:50:16 -- setup/common.sh@31 -- # IFS=': '
00:05:37.918 02:50:16 -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the same compare/continue/IFS/read cycle repeats for every remaining /proc/meminfo key from Mlocked through HardwareCorrupted; none matches AnonHugePages]
00:05:37.919 02:50:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:37.919 02:50:16 -- setup/common.sh@33 -- # echo 0
00:05:37.919 02:50:16 -- setup/common.sh@33 -- # return 0
00:05:37.919 02:50:16 -- setup/hugepages.sh@97 -- # anon=0
00:05:37.919 02:50:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:37.919 02:50:16 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:37.919 02:50:16 -- setup/common.sh@18 -- # local node=
00:05:37.919 02:50:16 -- setup/common.sh@19 -- # local var val
00:05:37.919 02:50:16 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.919 02:50:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.919 02:50:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.919 02:50:16 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.919 02:50:16 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.919 02:50:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.919 02:50:16 -- setup/common.sh@31 -- # IFS=': '
00:05:37.919 02:50:16 -- setup/common.sh@31 -- # read -r var val _
00:05:37.919 02:50:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259044 kB' 'MemAvailable: 9491808 kB' 'Buffers: 2436 kB' 'Cached: 3434900 kB' 'SwapCached: 0 kB' 'Active: 851028 kB' 'Inactive: 2708948 kB' 'Active(anon): 133108 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 124460 kB' 'Mapped: 49080 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6736 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
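
What the trace above shows is setup/common.sh scanning a meminfo snapshot one "key: value" line at a time until it reaches the requested key, then echoing that key's value. A minimal sketch of that loop, reconstructed from the xtrace alone (the function and variable names follow the trace; everything else is an assumption, and the real SPDK script may differ):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern used below

    # Sketch of the traced helper, not the verbatim SPDK source.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node sysfs snapshot.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # each mismatch is one "continue" in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    anon=$(get_meminfo AnonHugePages)     # -> 0 in the run above
    surp=$(get_meminfo HugePages_Surp 0)  # per-node variant, node 0

Because the snapshot is scanned linearly, keys near the end of the file (the HugePages_* counters) cost a full pass of compare/continue cycles, which is exactly the repetition the trace records.
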
00:05:37.919 02:50:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.919 02:50:16 -- setup/common.sh@32 -- # continue
[xtrace elided: compare/continue repeats for every snapshot key from MemFree through HugePages_Rsvd while the wall clock ticks from 02:50:16 to 02:50:17; none matches HugePages_Surp]
00:05:37.921 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:37.921 02:50:17 -- setup/common.sh@33 -- # echo 0
00:05:37.921 02:50:17 -- setup/common.sh@33 -- # return 0
00:05:37.921 02:50:17 -- setup/hugepages.sh@99 -- # surp=0
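
A note on the odd-looking right-hand sides such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p: inside [[ ... ]] an unquoted right operand of == is a glob pattern, so the script compares against a quoted variable, and bash's xtrace renders a quoted pattern operand by backslash-escaping every character to show it is matched literally. A two-line illustration (not from this log; this job also uses a custom PS4, hence the "-- #" prefixes instead of the default "+"):

    set -x
    get=HugePages_Surp
    [[ HugePages_Free == "$get" ]]
    # xtrace renders the test as: [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
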
00:05:37.921 02:50:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:37.921 02:50:17 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:37.921 02:50:17 -- setup/common.sh@18 -- # local node=
00:05:37.921 02:50:17 -- setup/common.sh@19 -- # local var val
00:05:37.921 02:50:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:37.921 02:50:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:37.921 02:50:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:37.921 02:50:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:37.921 02:50:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:37.921 02:50:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:37.921 02:50:17 -- setup/common.sh@31 -- # IFS=': '
00:05:37.921 02:50:17 -- setup/common.sh@31 -- # read -r var val _
00:05:37.921 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259044 kB' 'MemAvailable: 9491808 kB' 'Buffers: 2436 kB' 'Cached: 3434900 kB' 'SwapCached: 0 kB' 'Active: 851044 kB' 'Inactive: 2708948 kB' 'Active(anon): 133124 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 124216 kB' 'Mapped: 49080 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6736 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: compare/continue repeats for every snapshot key from MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:37.922 02:50:17 -- setup/common.sh@33 -- # echo 0
00:05:37.922 02:50:17 -- setup/common.sh@33 -- # return 0
00:05:37.922 02:50:17 -- setup/hugepages.sh@100 -- # resv=0
nr_hugepages=1024
00:05:37.922 02:50:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
00:05:37.922 02:50:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:37.922 02:50:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:37.922 02:50:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:37.922 02:50:17 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:37.922 02:50:17 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
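
At this point the script has pulled the anonymous, surplus, and reserved hugepage counts out of the same snapshot and checks that the kernel's books balance against the requested page count. A sketch of that arithmetic, using the get_meminfo helper traced above (the variable names and the check at hugepages.sh@107 follow the trace; treat anything else as an assumption):

    nr_hugepages=1024                      # the count requested by the test
    surp=$(get_meminfo HugePages_Surp)     # 0 in the run above
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in the run above
    total=$(get_meminfo HugePages_Total)   # 1024 in the run above

    # Mirrors the (( 1024 == nr_hugepages + surp + resv )) check at hugepages.sh@107.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2
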
mem=("${mem[@]#Node +([0-9]) }") 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259044 kB' 'MemAvailable: 9491808 kB' 'Buffers: 2436 kB' 'Cached: 3434900 kB' 'SwapCached: 0 kB' 'Active: 850992 kB' 'Inactive: 2708948 kB' 'Active(anon): 133072 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'AnonPages: 124212 kB' 'Mapped: 49080 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165440 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79768 kB' 'KernelStack: 6704 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 
02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.922 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.922 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 
00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:37.923 02:50:17 -- setup/common.sh@32 -- # continue 00:05:37.923 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 
00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.184 02:50:17 -- setup/common.sh@33 -- # echo 1024 00:05:38.184 02:50:17 -- setup/common.sh@33 -- # return 0 00:05:38.184 02:50:17 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:38.184 02:50:17 -- setup/hugepages.sh@112 -- # get_nodes 00:05:38.184 02:50:17 -- setup/hugepages.sh@27 -- # local node 00:05:38.184 02:50:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:38.184 02:50:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:38.184 02:50:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:38.184 02:50:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:38.184 02:50:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:38.184 02:50:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:38.184 02:50:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:38.184 02:50:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.184 02:50:17 -- setup/common.sh@18 -- # local node=0 00:05:38.184 02:50:17 -- setup/common.sh@19 -- # local var val 00:05:38.184 02:50:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.184 02:50:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.184 02:50:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:38.184 02:50:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:38.184 02:50:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.184 02:50:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.184 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.184 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259044 kB' 'MemUsed: 5982936 kB' 'SwapCached: 0 kB' 'Active: 850760 kB' 'Inactive: 2708948 kB' 'Active(anon): 132840 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2708948 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 856 kB' 'Writeback: 0 kB' 'FilePages: 3437336 kB' 'Mapped: 48848 kB' 'AnonPages: 124012 kB' 'Shmem: 10468 kB' 'KernelStack: 6704 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 165452 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:38.184 02:50:17 -- 
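
Note how this per-node snapshot differs from /proc/meminfo: the sysfs file has no MemAvailable, Buffers, or Cached fields and instead reports MemUsed and FilePages, which is why the key list scanned below is shorter. A quick way to see the difference on any NUMA machine (an illustration, not taken from this log):

    # Keys unique to /proc/meminfo (column 1) vs. the node 0 snapshot (column 2).
    comm -3 \
        <(cut -d: -f1 /proc/meminfo | sort) \
        <(sed 's/^Node [0-9]* //' /sys/devices/system/node/node0/meminfo | cut -d: -f1 | sort)
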
setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.184 02:50:17 -- setup/common.sh@32 -- # continue
[setup/common.sh@32/@31 xtrace condensed: the same match-fail / continue / IFS=': ' / read -r var val _ cycle repeats for each remaining meminfo key, MemFree through HugePages_Free, until HugePages_Surp matches]
00:05:38.184 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.184 02:50:17 -- setup/common.sh@33 -- # echo 0
00:05:38.184 02:50:17 -- setup/common.sh@33 -- # return 0
00:05:38.184 02:50:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:38.184 02:50:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:38.184 02:50:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:38.184 02:50:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
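The scan traced above, and the three like it further down, are the same small parsing loop each time. Below is a minimal bash sketch of a get_meminfo-style lookup reconstructed from the xtrace; the real setup/common.sh snapshots the file with mapfile/printf first, so treat this simplified loop as an assumption rather than the script's exact source.

    # Sketch (assumed reconstruction): scan /proc/meminfo line by line,
    # splitting on ': ' so that "HugePages_Surp:   0" yields var=HugePages_Surp
    # and val=0; every non-matching key is one "continue" in the xtrace above.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"        # value only; any trailing "kB" lands in $_
            return 0
        done < /proc/meminfo
    }

    get_meminfo HugePages_Surp   # prints 0 on this box, matching the "echo 0" above

Splitting on IFS=': ' is what lets a single read populate the key and the value in one pass, which is why the xtrace shows an IFS=': ' / read pair for every key visited.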
00:05:38.184 node0=1024 expecting 1024
00:05:38.184 02:50:17 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:38.184 02:50:17 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:38.184
00:05:38.184 real 0m0.512s
00:05:38.184 user 0m0.255s
00:05:38.184 sys 0m0.282s
00:05:38.184 02:50:17 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:38.184 02:50:17 -- common/autotest_common.sh@10 -- # set +x
00:05:38.184 ************************************
00:05:38.184 END TEST even_2G_alloc
00:05:38.184 ************************************
00:05:38.184 02:50:17 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:38.184 02:50:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:38.184 02:50:17 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:38.184 02:50:17 -- common/autotest_common.sh@10 -- # set +x
00:05:38.184 ************************************
00:05:38.184 START TEST odd_alloc
00:05:38.185 ************************************
00:05:38.185 02:50:17 -- common/autotest_common.sh@1111 -- # odd_alloc
00:05:38.185 02:50:17 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:38.185 02:50:17 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:38.185 02:50:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:38.185 02:50:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:38.185 02:50:17 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:38.185 02:50:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:38.185 02:50:17 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:38.185 02:50:17 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:38.185 02:50:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:38.185 02:50:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:38.185 02:50:17 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:38.185 02:50:17 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:38.185 02:50:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:38.185 02:50:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:38.185 02:50:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.185 02:50:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:38.185 02:50:17 -- setup/hugepages.sh@83 -- # : 0
00:05:38.185 02:50:17 -- setup/hugepages.sh@84 -- # : 0
00:05:38.185 02:50:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:38.185 02:50:17 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:38.185 02:50:17 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:38.185 02:50:17 -- setup/hugepages.sh@160 -- # setup output
00:05:38.185 02:50:17 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:38.185 02:50:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:38.449 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:38.449 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:38.449 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:38.450 02:50:17 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:38.450 02:50:17 -- setup/hugepages.sh@89 -- # local node
00:05:38.450 02:50:17 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:38.450 02:50:17 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:38.450 02:50:17 -- setup/hugepages.sh@92 -- # local surp
00:05:38.450 02:50:17 -- setup/hugepages.sh@93 -- # local resv
00:05:38.450 02:50:17 -- setup/hugepages.sh@94 -- # local anon
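For context on the numbers just traced: get_test_nr_hugepages 2098176 turns HUGEMEM=2049 (MB) into nr_hugepages=1025, a deliberately odd page count for this test. The sketch below reproduces the logged values under an assumed ceiling-division rounding; the exact expression in hugepages.sh may differ.

    # Assumed arithmetic: 2049 MB requested, 2 MB (2048 kB) hugepages.
    hugemem_mb=2049
    size_kb=$((hugemem_mb * 1024))       # 2098176, the size seen in the trace
    hugepagesize_kb=2048                 # Hugepagesize from /proc/meminfo
    nr_hugepages=$(((size_kb + hugepagesize_kb - 1) / hugepagesize_kb))
    echo "nr_hugepages=$nr_hugepages"    # -> nr_hugepages=1025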
00:05:38.450 02:50:17 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:38.450 02:50:17 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:38.450 02:50:17 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:38.450 02:50:17 -- setup/common.sh@18 -- # local node=
00:05:38.450 02:50:17 -- setup/common.sh@19 -- # local var val
00:05:38.450 02:50:17 -- setup/common.sh@20 -- # local mem_f mem
00:05:38.450 02:50:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:38.450 02:50:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:38.450 02:50:17 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:38.450 02:50:17 -- setup/common.sh@28 -- # mapfile -t mem
00:05:38.450 02:50:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:38.450 02:50:17 -- setup/common.sh@31 -- # IFS=': '
00:05:38.450 02:50:17 -- setup/common.sh@31 -- # read -r var val _
00:05:38.450 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259652 kB' 'MemAvailable: 9492468 kB' 'Buffers: 2436 kB' 'Cached: 3434952 kB' 'SwapCached: 0 kB' 'Active: 851412 kB' 'Inactive: 2709000 kB' 'Active(anon): 133492 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124644 kB' 'Mapped: 48984 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165416 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79744 kB' 'KernelStack: 6724 kB' 'PageTables: 4520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55124 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@32/@31 xtrace condensed: match-fail / continue / IFS=': ' / read -r var val _ repeated for every key from MemTotal through HardwareCorrupted until AnonHugePages matches]
00:05:38.450 02:50:17 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:38.450 02:50:17 -- setup/common.sh@33 -- # echo 0
00:05:38.450 02:50:17 -- setup/common.sh@33 -- # return 0
00:05:38.450 02:50:17 -- setup/hugepages.sh@97 -- # anon=0
00:05:38.450 02:50:17 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[setup/common.sh@17-@31 xtrace condensed: same local/mapfile/IFS preamble as the call above, with get=HugePages_Surp]
00:05:38.450 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259652 kB' 'MemAvailable: 9492468 kB' 'Buffers: 2436 kB' 'Cached: 3434952 kB' 'SwapCached: 0 kB' 'Active: 851060 kB' 'Inactive: 2709000 kB' 'Active(anon): 133140 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124316 kB' 'Mapped: 48968 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165420 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6628 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@32/@31 xtrace condensed: match-fail / continue / IFS=': ' / read -r var val _ repeated for every key from MemTotal through HugePages_Rsvd until HugePages_Surp matches]
00:05:38.715 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:38.715 02:50:17 -- setup/common.sh@33 -- # echo 0
00:05:38.715 02:50:17 -- setup/common.sh@33 -- # return 0
00:05:38.715 02:50:17 -- setup/hugepages.sh@99 -- # surp=0
00:05:38.715 02:50:17 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[setup/common.sh@17-@31 xtrace condensed: same local/mapfile/IFS preamble, with get=HugePages_Rsvd]
00:05:38.716 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259652 kB' 'MemAvailable: 9492468 kB' 'Buffers: 2436 kB' 'Cached: 3434952 kB' 'SwapCached: 0 kB' 'Active: 851032 kB' 'Inactive: 2709000 kB' 'Active(anon): 133112 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124276 kB' 'Mapped: 48860 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165424 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79752 kB' 'KernelStack: 6704 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
[setup/common.sh@32/@31 xtrace condensed: match-fail / continue / IFS=': ' / read -r var val _ repeated for every key from MemTotal through HugePages_Free until HugePages_Rsvd matches]
00:05:38.717 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:38.717 02:50:17 -- setup/common.sh@33 -- # echo 0
00:05:38.717 02:50:17 -- setup/common.sh@33 -- # return 0
00:05:38.717 02:50:17 -- setup/hugepages.sh@100 -- # resv=0
00:05:38.717 nr_hugepages=1025
00:05:38.717 02:50:17 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:38.717 resv_hugepages=0
00:05:38.717 02:50:17 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:38.717 surplus_hugepages=0
00:05:38.717 02:50:17 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:38.717 anon_hugepages=0
00:05:38.717 02:50:17 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:38.717 02:50:17 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:38.717 02:50:17 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:05:38.717 02:50:17 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[setup/common.sh@17-@31 xtrace condensed: same local/mapfile/IFS preamble, with get=HugePages_Total]
00:05:38.717 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259652 kB' 'MemAvailable: 9492468 kB' 'Buffers: 2436 kB' 'Cached: 3434952 kB' 'SwapCached: 0 kB' 'Active: 851008 kB' 'Inactive: 2709000 kB' 'Active(anon): 133088 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'AnonPages: 124244 kB' 'Mapped: 48860 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165424 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79752 kB' 'KernelStack: 6688 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55060 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
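The three zero results above (anon=0, surp=0, resv=0) feed the consistency check at hugepages.sh@107, (( 1025 == nr_hugepages + surp + resv )), and the scan below re-reads HugePages_Total for the per-node bookkeeping. A hedged sketch of that verification, with awk standing in for the script's own get_meminfo and illustrative variable names:

    # Sketch: confirm the kernel really allocated the requested odd count.
    expected=1025
    surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)    # 0 in the trace
    resv=$(awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo)    # 0 in the trace
    total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)  # 1025 in the snapshots
    (( total + surp + resv == expected )) && echo "nr_hugepages=$expected verified"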
00:05:38.717 02:50:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.717 02:50:17 -- setup/common.sh@32 -- # continue [xtrace scan elided: each remaining /proc/meminfo key, MemFree through Unaccepted, is compared against HugePages_Total and skipped with 'continue'] 00:05:38.718 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:38.718 02:50:17 -- setup/common.sh@33 -- # echo 1025 00:05:38.718 02:50:17 -- setup/common.sh@33 -- # return 0 00:05:38.718 02:50:17 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:38.718 02:50:17 -- setup/hugepages.sh@112 -- # get_nodes 00:05:38.718 02:50:17 -- setup/hugepages.sh@27 -- # local node 00:05:38.718 02:50:17 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:38.718 02:50:17 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025
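[Annotation] get_nodes, traced at hugepages.sh@27-30 just above, records the per-NUMA-node hugepage count from sysfs; on this single-node VM it finds only node0 and stores 1025 there. A sketch of that walk follows; the assignment inside the loop is an assumption reconstructed from the traced result, reusing the hypothetical helper from the earlier sketch.

  shopt -s extglob
  declare -a nodes_sys=()   # maps node index -> HugePages_Total for that node
  get_nodes_sketch() {
    local node no_nodes=0
    for node in /sys/devices/system/node/node+([0-9]); do
      # "${node##*node}" keeps only the numeric index, e.g. .../node0 -> 0
      nodes_sys[${node##*node}]=$(get_meminfo_sketch HugePages_Total "${node##*node}")
      ((++no_nodes))
    done
    (( no_nodes > 0 ))  # the trace shows no_nodes=1 on this VM
  }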
00:05:38.718 02:50:17 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:38.718 02:50:17 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:38.718 02:50:17 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:38.718 02:50:17 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:38.718 02:50:17 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:38.718 02:50:17 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:38.718 02:50:17 -- setup/common.sh@18 -- # local node=0 00:05:38.718 02:50:17 -- setup/common.sh@19 -- # local var val 00:05:38.718 02:50:17 -- setup/common.sh@20 -- # local mem_f mem 00:05:38.718 02:50:17 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:38.718 02:50:17 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:38.718 02:50:17 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:38.718 02:50:17 -- setup/common.sh@28 -- # mapfile -t mem 00:05:38.718 02:50:17 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:38.718 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.718 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.718 02:50:17 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6259652 kB' 'MemUsed: 5982328 kB' 'SwapCached: 0 kB' 'Active: 851000 kB' 'Inactive: 2709000 kB' 'Active(anon): 133080 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1032 kB' 'Writeback: 0 kB' 'FilePages: 3437388 kB' 'Mapped: 48860 kB' 'AnonPages: 124208 kB' 'Shmem: 10468 kB' 'KernelStack: 6672 kB' 'PageTables: 4296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 165424 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79752 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 
02:50:17 -- setup/common.sh@32 -- # continue [xtrace scan elided: most remaining node0 meminfo keys are compared against HugePages_Surp and skipped with 'continue'] 00:05:38.719 02:50:17 --
setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.719 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.719 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.720 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.720 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.720 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.720 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.720 02:50:17 -- setup/common.sh@32 -- # continue 00:05:38.720 02:50:17 -- setup/common.sh@31 -- # IFS=': ' 00:05:38.720 02:50:17 -- setup/common.sh@31 -- # read -r var val _ 00:05:38.720 02:50:17 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:38.720 02:50:17 -- setup/common.sh@33 -- # echo 0 00:05:38.720 02:50:17 -- setup/common.sh@33 -- # return 0 00:05:38.720 02:50:17 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:38.720 02:50:17 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:38.720 02:50:17 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:38.720 node0=1025 expecting 1025 00:05:38.720 02:50:17 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:38.720 02:50:17 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:38.720 00:05:38.720 real 0m0.500s 00:05:38.720 user 0m0.279s 00:05:38.720 sys 0m0.257s 00:05:38.720 02:50:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:38.720 02:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.720 ************************************ 00:05:38.720 END TEST odd_alloc 00:05:38.720 ************************************ 00:05:38.720 02:50:17 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:38.720 02:50:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:38.720 02:50:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:38.720 02:50:17 -- common/autotest_common.sh@10 -- # set +x 00:05:38.720 ************************************ 00:05:38.720 START TEST custom_alloc 00:05:38.720 ************************************ 00:05:38.720 02:50:17 -- common/autotest_common.sh@1111 -- # custom_alloc 00:05:38.720 02:50:17 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:38.720 02:50:17 -- setup/hugepages.sh@169 -- # local node 00:05:38.720 02:50:17 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:38.720 02:50:17 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:38.720 02:50:17 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:38.720 02:50:17 -- setup/hugepages.sh@174 -- 
# get_test_nr_hugepages 1048576 00:05:38.720 02:50:17 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:38.720 02:50:17 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:38.720 02:50:17 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:38.720 02:50:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.720 02:50:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.720 02:50:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:38.720 02:50:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.720 02:50:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.720 02:50:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.720 02:50:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:38.720 02:50:17 -- setup/hugepages.sh@83 -- # : 0 00:05:38.720 02:50:17 -- setup/hugepages.sh@84 -- # : 0 00:05:38.720 02:50:17 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:38.720 02:50:17 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:38.720 02:50:17 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:38.720 02:50:17 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:38.720 02:50:17 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:38.720 02:50:17 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:38.720 02:50:17 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:38.720 02:50:17 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:38.720 02:50:17 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:38.720 02:50:17 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:38.720 02:50:17 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:38.720 02:50:17 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:38.720 02:50:17 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:38.720 02:50:17 -- setup/hugepages.sh@78 -- # return 0 00:05:38.720 02:50:17 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:38.720 02:50:17 -- setup/hugepages.sh@187 -- # setup output 00:05:38.720 02:50:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.720 02:50:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:39.291 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:39.291 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.291 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:39.291 02:50:18 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:39.291 02:50:18 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:39.291 02:50:18 -- setup/hugepages.sh@89 -- # local node 00:05:39.291 02:50:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:39.291 02:50:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:39.291 02:50:18 -- setup/hugepages.sh@92 -- # local surp 
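[Annotation] The custom_alloc sizing traced at hugepages.sh@49-57 above: the test requests 1048576 kB (1 GiB) of hugepages, and with the 2048 kB Hugepagesize reported in the meminfo snapshots that comes to 512 pages, all pinned to node 0 through HUGENODE='nodes_hp[0]=512'. The division as a worked sketch, with variable names following the trace:

  size=1048576            # requested kB, the argument to get_test_nr_hugepages
  default_hugepages=2048  # kB, from 'Hugepagesize: 2048 kB'
  (( size >= default_hugepages )) && nr_hugepages=$(( size / default_hugepages ))
  echo "nr_hugepages=$nr_hugepages"  # 1048576 / 2048 = 512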
00:05:39.291 02:50:18 -- setup/hugepages.sh@93 -- # local resv 00:05:39.291 02:50:18 -- setup/hugepages.sh@94 -- # local anon 00:05:39.291 02:50:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:39.291 02:50:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:39.291 02:50:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:39.291 02:50:18 -- setup/common.sh@18 -- # local node= 00:05:39.291 02:50:18 -- setup/common.sh@19 -- # local var val 00:05:39.291 02:50:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.291 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.291 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.291 02:50:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.291 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.291 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7310560 kB' 'MemAvailable: 10543376 kB' 'Buffers: 2436 kB' 'Cached: 3434952 kB' 'SwapCached: 0 kB' 'Active: 851772 kB' 'Inactive: 2709000 kB' 'Active(anon): 133852 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1196 kB' 'Writeback: 0 kB' 'AnonPages: 124964 kB' 'Mapped: 48932 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165420 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6724 kB' 'PageTables: 4516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55076 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.291 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.291 02:50:18 -- setup/common.sh@32 -- # continue [xtrace scan elided: most remaining /proc/meminfo keys are compared against AnonHugePages and skipped with 'continue'] 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': '
00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.292 02:50:18 -- setup/common.sh@33 -- # echo 0 00:05:39.292 02:50:18 -- setup/common.sh@33 -- # return 0 00:05:39.292 02:50:18 -- setup/hugepages.sh@97 -- # anon=0 00:05:39.292 02:50:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.292 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.292 02:50:18 -- setup/common.sh@18 -- # local node= 00:05:39.292 02:50:18 -- setup/common.sh@19 -- # local var val 00:05:39.292 02:50:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.292 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
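[Annotation] verify_nr_hugepages (hugepages.sh@89 onward) re-reads the counters it just configured: AnonHugePages came back 0 above (it only matters when transparent hugepages serve anonymous memory, which is why the earlier [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test inspects the THP mode string first), and the trace is now fetching HugePages_Surp, with HugePages_Rsvd next, to confirm the accounting identity used throughout these tests. Schematically, as a hedged sketch reusing the hypothetical helper and the nr_hugepages value from the sketches above:

  anon=$(get_meminfo_sketch AnonHugePages)    # THP-backed anonymous memory, 0 here
  surp=$(get_meminfo_sketch HugePages_Surp)   # pages allocated beyond nr_hugepages
  resv=$(get_meminfo_sketch HugePages_Rsvd)   # reserved but not yet faulted pages
  total=$(get_meminfo_sketch HugePages_Total)
  (( total == nr_hugepages + surp + resv ))   # for custom_alloc: 512 == 512 + 0 + 0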
00:05:39.292 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.292 02:50:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.292 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.292 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7310312 kB' 'MemAvailable: 10543128 kB' 'Buffers: 2436 kB' 'Cached: 3434952 kB' 'SwapCached: 0 kB' 'Active: 850984 kB' 'Inactive: 2709000 kB' 'Active(anon): 133064 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709000 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1196 kB' 'Writeback: 0 kB' 'AnonPages: 124440 kB' 'Mapped: 48868 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165420 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6720 kB' 'PageTables: 4432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.292 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.292 02:50:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:39.292 02:50:18 -- 
setup/common.sh@32 -- # continue [xtrace scan elided: most remaining /proc/meminfo keys are compared against HugePages_Surp and skipped with 'continue'] 00:05:39.293 02:50:18 -- setup/common.sh@32 -- # [[ FilePmdMapped ==
00:05:39.293 [xtrace elided: the HugePages_Surp read loop continues past the remaining /proc/meminfo keys (CmaTotal through HugePages_Rsvd) until it reaches HugePages_Surp]
00:05:39.294 02:50:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.294 02:50:18 -- setup/common.sh@33 -- # echo 0
00:05:39.294 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.294 02:50:18 -- setup/hugepages.sh@99 -- # surp=0
00:05:39.294 02:50:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.294 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.294 02:50:18 -- setup/common.sh@18 -- # local node=
00:05:39.294 02:50:18 -- setup/common.sh@19 -- # local var val
00:05:39.294 02:50:18 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.294 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.294 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.294 02:50:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.294 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.294 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.294 02:50:18 -- setup/common.sh@31 -- # IFS=': '
00:05:39.294 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7310312 kB' 'MemAvailable: 10543128 kB' ... 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB' [full /proc/meminfo snapshot elided]
00:05:39.294 [xtrace elided: the read loop walks the snapshot with IFS=': ' and continues past every key that is not HugePages_Rsvd]
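The elided scans above are all the same mechanism: get_meminfo snapshots a meminfo file into an array, then reads it back one line at a time with IFS=': ', skipping every key until it hits the one requested. A minimal stand-alone sketch of that lookup, with illustrative names (the traced helper in setup/common.sh keeps the snapshot in an array and strips the per-node prefix with a mapfile pattern instead of sed):

    #!/usr/bin/env bash
    # Sketch only: print the value column for one meminfo key, the way
    # the traced get_meminfo does. An optional node id switches the
    # source to that node's sysfs meminfo file.
    get_meminfo_sketch() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node lines carry a "Node N " prefix; drop it so both file
        # formats split identically under IFS=': '
        while IFS=': ' read -r var val _; do
            # Non-matching keys are skipped -- the long "continue" runs
            # in the trace above are exactly this branch
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        echo 0  # key absent: report 0 rather than failing
    }

    get_meminfo_sketch HugePages_Rsvd     # -> 0 in the snapshot above
    get_meminfo_sketch HugePages_Surp 0   # per-node form, node0 -> 0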
00:05:39.295 02:50:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.295 02:50:18 -- setup/common.sh@33 -- # echo 0
00:05:39.295 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.295 02:50:18 -- setup/hugepages.sh@100 -- # resv=0
00:05:39.295 nr_hugepages=512
00:05:39.295 02:50:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:05:39.295 resv_hugepages=0
00:05:39.295 02:50:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:39.295 surplus_hugepages=0
00:05:39.295 02:50:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:39.295 anon_hugepages=0
00:05:39.295 02:50:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:39.295 02:50:18 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:39.295 02:50:18 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:39.295 02:50:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.295 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.295 [xtrace elided: same /proc/meminfo snapshot and per-key scan as above, this time stopping at HugePages_Total]
00:05:39.297 02:50:18 -- setup/common.sh@33 -- # echo 512
00:05:39.297 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.297 02:50:18 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:39.297 02:50:18 -- setup/hugepages.sh@112 -- # get_nodes
00:05:39.297 02:50:18 -- setup/hugepages.sh@27 -- # local node
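The two arithmetic guards just traced, hugepages.sh@107 and @110, encode the invariant under test: the pool the kernel reports in HugePages_Total must equal the requested page count plus any surplus and reserved pages. A self-contained sketch of the same check (the awk helper is a stand-in for the script's get_meminfo, and the variable names are illustrative):

    #!/usr/bin/env bash
    # Stand-in lookup: first value column for a /proc/meminfo key
    meminfo_val() { awk -v k="$1:" '$1 == k { print $2; exit }' /proc/meminfo; }

    nr_hugepages=512   # what the test asked the kernel for
    surp=$(meminfo_val HugePages_Surp)
    resv=$(meminfo_val HugePages_Rsvd)
    total=$(meminfo_val HugePages_Total)

    # Same invariant as the trace: the pool the kernel reports must
    # account for every requested, surplus and reserved page
    if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
    else
        echo "pool mismatch: total=$total != $((nr_hugepages + surp + resv))" >&2
        exit 1
    fi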
00:05:39.297 02:50:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:39.297 02:50:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:39.297 02:50:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:39.297 02:50:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:39.297 02:50:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:39.297 02:50:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:39.297 02:50:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:39.297 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.297 02:50:18 -- setup/common.sh@18 -- # local node=0
00:05:39.297 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:39.297 02:50:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:39.297 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7310312 kB' 'MemUsed: 4931668 kB' ... 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' [node0 snapshot elided]
00:05:39.297 [xtrace elided: per-key scan of the node0 snapshot, stopping at HugePages_Surp]
00:05:39.298 02:50:18 -- setup/common.sh@33 -- # echo 0
00:05:39.298 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.298 02:50:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:39.298 02:50:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:39.298 02:50:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:39.298 02:50:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:39.298 node0=512 expecting 512
00:05:39.298 02:50:18 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:39.298 02:50:18 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:39.298 
00:05:39.298 real	0m0.520s
00:05:39.298 user	0m0.266s
00:05:39.298 sys	0m0.289s
00:05:39.298 02:50:18 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:05:39.298 02:50:18 -- common/autotest_common.sh@10 -- # set +x
00:05:39.298 ************************************
00:05:39.298 END TEST custom_alloc
00:05:39.298 ************************************
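custom_alloc's closing step, just above, compares the per-node pool against the expectation: get_nodes walks /sys/devices/system/node/node+([0-9]) with an extglob, records one count per NUMA node, and the verifier prints lines such as node0=512 expecting 512. A sketch of that walk with a plain glob (helper names are illustrative; per-node meminfo lines carry a "Node N" prefix, so the key sits in field 3):

    #!/usr/bin/env bash
    # Per-node lines look like "Node 0 HugePages_Total:   512",
    # so the key is field 3 and the value field 4
    node_val() {
        awk -v k="$2:" '$3 == k { print $4; exit }' \
            "/sys/devices/system/node/node$1/meminfo"
    }

    shopt -s nullglob
    expected=512   # the nodes_test[] value set earlier in the trace
    for node in /sys/devices/system/node/node[0-9]*; do
        id=${node##*node}   # same suffix-strip as nodes_sys[${node##*node}]
        echo "node${id}=$(node_val "$id" HugePages_Total) expecting ${expected}"
    done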
00:05:39.298 02:50:18 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:05:39.298 02:50:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:05:39.298 02:50:18 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:05:39.298 02:50:18 -- common/autotest_common.sh@10 -- # set +x
00:05:39.557 ************************************
00:05:39.557 START TEST no_shrink_alloc
00:05:39.557 ************************************
00:05:39.557 02:50:18 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:05:39.557 02:50:18 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:05:39.557 02:50:18 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:39.557 02:50:18 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:39.557 02:50:18 -- setup/hugepages.sh@51 -- # shift
00:05:39.557 02:50:18 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:39.557 02:50:18 -- setup/hugepages.sh@52 -- # local node_ids
00:05:39.557 02:50:18 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:39.557 02:50:18 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:39.557 02:50:18 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:39.557 02:50:18 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:39.557 02:50:18 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:39.557 02:50:18 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:39.557 02:50:18 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:39.557 02:50:18 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:39.557 02:50:18 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:39.557 02:50:18 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:39.557 02:50:18 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:39.557 02:50:18 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:39.557 02:50:18 -- setup/hugepages.sh@73 -- # return 0
00:05:39.558 02:50:18 -- setup/hugepages.sh@198 -- # setup output
00:05:39.558 02:50:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:39.558 02:50:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:39.819 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:39.819 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:39.819 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
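The get_test_nr_hugepages 2097152 0 call above lands on nr_hugepages=1024, which is consistent with dividing the requested size in kB by the 2048 kB Hugepagesize. The division itself is not visible in the trace, so the sketch below is an assumption about the helper's arithmetic, not a copy of it:

    #!/usr/bin/env bash
    size_kb=2097152                                   # argument from the trace above
    hugepage_kb=$(awk '$1 == "Hugepagesize:" { print $2; exit }' /proc/meminfo)

    # Guard mirrors the traced "(( size >= default_hugepages ))" check
    (( size_kb >= hugepage_kb )) || { echo "request smaller than one page" >&2; exit 1; }

    # Assumed conversion: 2097152 kB / 2048 kB per page -> 1024 pages
    echo "nr_hugepages=$(( size_kb / hugepage_kb ))"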
00:05:39.819 02:50:18 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:39.819 02:50:18 -- setup/hugepages.sh@89 -- # local node
00:05:39.819 02:50:18 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:39.819 02:50:18 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:39.819 02:50:18 -- setup/hugepages.sh@92 -- # local surp
00:05:39.819 02:50:18 -- setup/hugepages.sh@93 -- # local resv
00:05:39.819 02:50:18 -- setup/hugepages.sh@94 -- # local anon
00:05:39.819 02:50:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
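The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above reads the transparent-hugepage mode: the bracketed word in the sysfs file is the active setting, so anonymous huge pages only need counting when that mode is not [never]. A sketch of the same probe:

    #!/usr/bin/env bash
    # e.g. "always [madvise] never" -- the bracketed word is active
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP active in some mode: account for AnonHugePages,
        # as the trace does next
        awk '$1 == "AnonHugePages:" { print "anon_kb=" $2; exit }' /proc/meminfo
    else
        echo "anon_kb=0"   # THP disabled; nothing to count
    fi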
IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- 
setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.820 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.820 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.821 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.821 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.821 02:50:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.821 02:50:18 -- setup/common.sh@32 -- # continue 00:05:39.821 02:50:18 -- setup/common.sh@31 -- # IFS=': ' 00:05:39.821 02:50:18 -- setup/common.sh@31 -- # read -r var val _ 00:05:39.821 02:50:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:39.821 02:50:18 -- setup/common.sh@33 -- # echo 0 00:05:39.821 02:50:18 -- setup/common.sh@33 -- # return 0 00:05:39.821 02:50:18 -- setup/hugepages.sh@97 -- # anon=0 00:05:39.821 02:50:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:39.821 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:39.821 02:50:18 -- setup/common.sh@18 -- # local node= 00:05:39.821 02:50:18 -- setup/common.sh@19 -- # local var val 00:05:39.821 02:50:18 -- setup/common.sh@20 -- # local mem_f mem 00:05:39.821 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:39.821 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:39.821 02:50:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:39.821 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem 00:05:39.821 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:39.821 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6260508 kB' 'MemAvailable: 9493332 kB' 'Buffers: 2436 kB' 'Cached: 3434960 kB' 'SwapCached: 0 kB' 'Active: 851212 kB' 'Inactive: 2709008 kB' 'Active(anon): 133292 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 124440 kB' 'Mapped: 48876 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165396 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79724 kB' 'KernelStack: 6688 kB' 'PageTables: 4332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 
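What this stretch of trace is doing, minus the repetition: setup/hugepages.sh asks get_meminfo (setup/common.sh) for one meminfo key at a time; the helper snapshots the file, then walks it with `IFS=': '` / `read -r var val _`, hitting `continue` on every key until the requested one matches, at which point the value is echoed and the function returns. A minimal standalone sketch of that scan pattern, assuming plain bash and a readable /proc/meminfo; the function name get_mem and its return-1 fallback are illustrative, not the suite's:

  #!/usr/bin/env bash
  # get_mem KEY [NODE] - print KEY's value column, scanning the same way
  # the traced get_meminfo does; NODE switches to the per-node sysfs file.
  get_mem() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#"Node $node "}     # per-node lines carry a "Node <n> " prefix
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"                # bare number; the kB column is dropped
              return 0
          fi
      done <"$mem_f"
      return 1                           # key not present (illustrative fallback)
  }

  get_mem AnonHugePages      # -> 0 on the host traced above
  get_mem HugePages_Surp 0   # -> 0, read from node0's meminfo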
00:05:39.821 02:50:18 -- setup/common.sh@31 -- # IFS=': '
00:05:39.821 02:50:18 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: setup/common.sh@31-32 walk the snapshot keys MemTotal ... HugePages_Rsvd; none is HugePages_Surp, each iteration ends in 'continue']
00:05:39.822 02:50:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:39.822 02:50:18 -- setup/common.sh@33 -- # echo 0
00:05:39.822 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.822 02:50:18 -- setup/hugepages.sh@99 -- # surp=0
00:05:39.822 02:50:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:39.822 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:39.822 02:50:18 -- setup/common.sh@18 -- # local node=
00:05:39.822 02:50:18 -- setup/common.sh@19 -- # local var val
00:05:39.822 02:50:18 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.822 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.822 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.822 02:50:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.822 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.822 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.822 02:50:18 -- setup/common.sh@31 -- # IFS=': '
00:05:39.822 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6260508 kB' 'MemAvailable: 9493332 kB' 'Buffers: 2436 kB' 'Cached: 3434960 kB' 'SwapCached: 0 kB' 'Active: 850828 kB' 'Inactive: 2709008 kB' 'Active(anon): 132908 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 124280 kB' 'Mapped: 48876 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165396 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79724 kB' 'KernelStack: 6656 kB' 'PageTables: 4244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55028 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:39.822 02:50:18 -- setup/common.sh@31 -- # read -r var val _
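For a single key, the same lookup is a one-liner outside pure bash; a hypothetical equivalent of the HugePages_Rsvd query replayed above, assuming awk is available on the test VM:

  awk '$1 == "HugePages_Rsvd:" {print $2}' /proc/meminfo

The suite's all-bash loop presumably trades that brevity for zero external dependencies in setup/common.sh; the cost is the very chatty xtrace this section consists of.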
[xtrace condensed: setup/common.sh@31-32 walk the snapshot keys MemTotal ... HugePages_Free; none is HugePages_Rsvd, each iteration ends in 'continue']
00:05:39.823 02:50:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:39.823 02:50:18 -- setup/common.sh@33 -- # echo 0
00:05:39.823 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.823 02:50:18 -- setup/hugepages.sh@100 -- # resv=0
00:05:39.823 nr_hugepages=1024
02:50:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
resv_hugepages=0
02:50:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
02:50:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
02:50:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
02:50:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
02:50:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
02:50:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:39.823 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:39.823 02:50:18 -- setup/common.sh@18 -- # local node=
00:05:39.823 02:50:18 -- setup/common.sh@19 -- # local var val
00:05:39.823 02:50:18 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.824 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.824 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:39.824 02:50:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:39.824 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.824 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.824 02:50:18 -- setup/common.sh@31 -- # IFS=': '
00:05:39.824 02:50:18 -- setup/common.sh@31 -- # read -r var val _
00:05:39.824 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6260508 kB' 'MemAvailable: 9493332 kB' 'Buffers: 2436 kB' 'Cached: 3434960 kB' 'SwapCached: 0 kB' 'Active: 850800 kB' 'Inactive: 2709008 kB' 'Active(anon): 132880 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 124028 kB' 'Mapped: 48876 kB' 'Shmem: 10468 kB' 'KReclaimable: 85672 kB' 'Slab: 165396 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79724 kB' 'KernelStack: 6688 kB' 'PageTables: 4340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355896 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55044 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
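The @102-@105 echoes above, together with the @107/@109 checks and the HugePages_Total lookup this snapshot feeds, are the pool verification: the kernel's reported total must equal the requested count once surplus and reserved pages are folded in. Re-expressed with the hypothetical get_mem sketch from earlier (variable names mine), the check amounts to:

  # Verify the hugepage pool reached the requested size, in the spirit of
  # the (( 1024 == nr_hugepages + surp + resv )) comparison in this trace.
  requested=1024
  total=$(get_mem HugePages_Total)   # 1024 in this run
  surp=$(get_mem HugePages_Surp)     # 0
  resv=$(get_mem HugePages_Rsvd)     # 0
  if (( total == requested + surp + resv )); then
      echo "hugepage pool consistent: $total pages"
  else
      echo "hugepage pool mismatch: total=$total requested=$requested surp=$surp resv=$resv" >&2
      exit 1
  fi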
[xtrace condensed: setup/common.sh@31-32 walk the snapshot keys MemTotal ... Unaccepted; none is HugePages_Total, each iteration ends in 'continue']
00:05:39.825 02:50:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:39.825 02:50:18 -- setup/common.sh@33 -- # echo 1024
00:05:39.825 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:39.825 02:50:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:39.825 02:50:18 -- setup/hugepages.sh@112 -- # get_nodes
00:05:39.825 02:50:18 -- setup/hugepages.sh@27 -- # local node
00:05:39.825 02:50:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:39.825 02:50:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:39.825 02:50:18 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:39.825 02:50:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:39.825 02:50:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:39.825 02:50:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:39.825 02:50:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:39.825 02:50:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:39.825 02:50:18 -- setup/common.sh@18 -- # local node=0
00:05:39.825 02:50:18 -- setup/common.sh@19 -- # local var val
00:05:39.825 02:50:18 -- setup/common.sh@20 -- # local mem_f mem
00:05:39.825 02:50:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:39.825 02:50:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:39.825 02:50:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:39.825 02:50:18 -- setup/common.sh@28 -- # mapfile -t mem
00:05:39.825 02:50:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:39.825 02:50:18 -- setup/common.sh@31 -- # IFS=': '
00:05:39.825 02:50:18 -- setup/common.sh@31 -- # read -r var val _
00:05:39.825 02:50:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6260508 kB' 'MemUsed: 5981472 kB' 'SwapCached: 0 kB' 'Active: 850744 kB' 'Inactive: 2709008 kB' 'Active(anon): 132824 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'FilePages: 3437396 kB' 'Mapped: 48876 kB' 'AnonPages: 123976 kB' 'Shmem: 10468 kB' 'KernelStack: 6672 kB' 'PageTables: 4292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85672 kB' 'Slab: 165396 kB' 'SReclaimable: 85672 kB' 'SUnreclaim: 79724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
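get_nodes (@27-@33 above) discovers NUMA nodes with an extglob pathname pattern and keys its per-node arrays by the numeric suffix of the sysfs directory; @115-@117 then re-run the meminfo scan against each node's own file (here node0). A sketch of that enumeration, reusing the hypothetical get_mem from earlier; the array name per_node_surp is mine:

  shopt -s extglob                        # +([0-9]) below is an extended glob
  per_node_surp=()
  for node in /sys/devices/system/node/node+([0-9]); do
      id=${node##*node}                   # ".../node0" -> "0"
      per_node_surp[id]=$(get_mem HugePages_Surp "$id")
  done
  declare -p per_node_surp                # e.g. declare -a per_node_surp=([0]="0")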
[xtrace condensed: setup/common.sh@31-32 walk the node0 snapshot keys MemTotal ... HugePages_Free; none is HugePages_Surp, each iteration ends in 'continue']
00:05:40.085 02:50:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.085 02:50:18 -- setup/common.sh@33 -- # echo 0
00:05:40.085 02:50:18 -- setup/common.sh@33 -- # return 0
00:05:40.085 02:50:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:40.085 02:50:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:40.085 02:50:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:40.085 02:50:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:40.085 node0=1024 expecting 1024
00:05:40.085 02:50:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:40.085 02:50:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:40.085 02:50:18 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:40.085 02:50:18 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:40.085 02:50:18 -- setup/hugepages.sh@202 -- # setup output
00:05:40.085 02:50:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:40.085 02:50:18 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:40.347 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:40.347 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.347 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:40.347 INFO: Requested 512 hugepages but 1024 already allocated on node0
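The INFO line above comes from scripts/setup.sh: the test re-runs it with NRHUGE=512 and CLEAR_HUGE=no, and because 1024 hugepages are already reserved on node0 and clearing is disabled, the existing reservation is kept rather than shrunk. A minimal sketch of that guard, assuming the standard sysfs interface for 2 MB pages (illustrative only, not the verbatim setup.sh logic):

    # Keep an existing hugepage reservation unless CLEAR_HUGE=yes.
    # NRHUGE and CLEAR_HUGE mirror the variables visible in the trace above.
    nr=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    current=$(<"$nr")
    if [[ $CLEAR_HUGE != yes ]] && (( current >= NRHUGE )); then
        echo "INFO: Requested $NRHUGE hugepages but $current already allocated on node0"
    else
        echo "$NRHUGE" > "$nr"
    fi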
00:05:40.347 02:50:19 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:40.348 02:50:19 -- setup/hugepages.sh@89 -- # local node
00:05:40.348 02:50:19 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:40.348 02:50:19 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:40.348 02:50:19 -- setup/hugepages.sh@92 -- # local surp
00:05:40.348 02:50:19 -- setup/hugepages.sh@93 -- # local resv
00:05:40.348 02:50:19 -- setup/hugepages.sh@94 -- # local anon
00:05:40.348 02:50:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:40.348 02:50:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:40.348 02:50:19 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:40.348 02:50:19 -- setup/common.sh@18 -- # local node=
00:05:40.348 02:50:19 -- setup/common.sh@19 -- # local var val
00:05:40.348 02:50:19 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.348 02:50:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.348 02:50:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.348 02:50:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.348 02:50:19 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.348 02:50:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.348 02:50:19 -- setup/common.sh@31 -- # IFS=': '
00:05:40.348 02:50:19 -- setup/common.sh@31 -- # read -r var val _
00:05:40.348 02:50:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6254968 kB' 'MemAvailable: 9487788 kB' 'Buffers: 2436 kB' 'Cached: 3434960 kB' 'SwapCached: 0 kB' 'Active: 846904 kB' 'Inactive: 2709008 kB' 'Active(anon): 128984 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 120076 kB' 'Mapped: 48324 kB' 'Shmem: 10468 kB' 'KReclaimable: 85664 kB' 'Slab: 165316 kB' 'SReclaimable: 85664 kB' 'SUnreclaim: 79652 kB' 'KernelStack: 6580 kB' 'PageTables: 4016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:40.349 02:50:19 -- setup/common.sh@31-32 -- # (get_meminfo scans MemTotal through HardwareCorrupted against AnonHugePages; no match, continue)
00:05:40.349 02:50:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:40.349 02:50:19 -- setup/common.sh@33 -- # echo 0
00:05:40.349 02:50:19 -- setup/common.sh@33 -- # return 0
00:05:40.349 02:50:19 -- setup/hugepages.sh@97 -- # anon=0
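Each of the long blocks in this part of the log is bash xtrace output of common.sh's get_meminfo: it reads /proc/meminfo (or a per-node /sys/devices/system/node/node<N>/meminfo when a node argument is given), strips any 'Node <N>' prefix, then walks the fields one by one until the name matches the requested key and echoes its value. The backslash-escaped patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are simply how xtrace prints the right-hand side of [[ ... == ... ]] so the comparison stays literal. A condensed re-creation of the pattern visible in the trace (a sketch, not the verbatim helper):

    shopt -s extglob
    get_meminfo() { # usage: get_meminfo <field> [node]
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # Prefer the per-node meminfo when a node is requested and present
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }") # drop the "Node N " prefix (extglob)
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # e.g. var=AnonHugePages val=0 for the line "AnonHugePages: 0 kB"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo AnonHugePages against the snapshot above, this prints 0, which is exactly the 'echo 0' / 'return 0' pair in the trace.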
00:05:40.349 02:50:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:40.349 02:50:19 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:40.349 02:50:19 -- setup/common.sh@18 -- # local node=
00:05:40.349 02:50:19 -- setup/common.sh@19 -- # local var val
00:05:40.349 02:50:19 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.349 02:50:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.349 02:50:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.349 02:50:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.349 02:50:19 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.349 02:50:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.349 02:50:19 -- setup/common.sh@31 -- # IFS=': '
00:05:40.349 02:50:19 -- setup/common.sh@31 -- # read -r var val _
00:05:40.349 02:50:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6254716 kB' 'MemAvailable: 9487536 kB' 'Buffers: 2436 kB' 'Cached: 3434960 kB' 'SwapCached: 0 kB' 'Active: 846300 kB' 'Inactive: 2709008 kB' 'Active(anon): 128380 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 119484 kB' 'Mapped: 48136 kB' 'Shmem: 10468 kB' 'KReclaimable: 85664 kB' 'Slab: 165304 kB' 'SReclaimable: 85664 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 6608 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:40.350 02:50:19 -- setup/common.sh@31-32 -- # (get_meminfo scans MemTotal through HugePages_Rsvd against HugePages_Surp; no match, continue)
00:05:40.350 02:50:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:40.350 02:50:19 -- setup/common.sh@33 -- # echo 0
00:05:40.350 02:50:19 -- setup/common.sh@33 -- # return 0
00:05:40.350 02:50:19 -- setup/hugepages.sh@99 -- # surp=0
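For reference on the two keys queried here and just below: HugePages_Surp ("surplus") counts pages temporarily allocated above the persistent pool via overcommit, and HugePages_Rsvd ("reserved") counts pages promised to mappings but not yet faulted in; both read 0, so the full 1024-page pool is plain, unreserved capacity. The same single-field lookup can also be done without the scan loop, e.g. (an alternative one-liner for comparison, not what common.sh uses):

    awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo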
00:05:40.350 02:50:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:40.350 02:50:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:40.350 02:50:19 -- setup/common.sh@18 -- # local node=
00:05:40.350 02:50:19 -- setup/common.sh@19 -- # local var val
00:05:40.350 02:50:19 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.350 02:50:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.350 02:50:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.350 02:50:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.350 02:50:19 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.350 02:50:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.350 02:50:19 -- setup/common.sh@31 -- # IFS=': '
00:05:40.350 02:50:19 -- setup/common.sh@31 -- # read -r var val _
00:05:40.350 02:50:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6254716 kB' 'MemAvailable: 9487536 kB' 'Buffers: 2436 kB' 'Cached: 3434960 kB' 'SwapCached: 0 kB' 'Active: 845944 kB' 'Inactive: 2709008 kB' 'Active(anon): 128024 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 119064 kB' 'Mapped: 48136 kB' 'Shmem: 10468 kB' 'KReclaimable: 85664 kB' 'Slab: 165304 kB' 'SReclaimable: 85664 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 6560 kB' 'PageTables: 3812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:40.352 02:50:19 -- setup/common.sh@31-32 -- # (get_meminfo scans MemTotal through HugePages_Free against HugePages_Rsvd; no match, continue)
00:05:40.352 02:50:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:40.352 02:50:19 -- setup/common.sh@33 -- # echo 0
00:05:40.352 02:50:19 -- setup/common.sh@33 -- # return 0
00:05:40.352 02:50:19 -- setup/hugepages.sh@100 -- # resv=0
00:05:40.352 nr_hugepages=1024
00:05:40.352 02:50:19 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:40.352 resv_hugepages=0
00:05:40.352 02:50:19 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:40.352 surplus_hugepages=0
00:05:40.352 02:50:19 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:40.352 anon_hugepages=0
00:05:40.352 02:50:19 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:40.352 02:50:19 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:40.352 02:50:19 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
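The two arithmetic guards above are the heart of verify_nr_hugepages: with surp=0 and resv=0, the observed page count (1024) must equal both nr_hugepages + surp + resv and nr_hugepages itself, otherwise the test would fail at this point. Standalone, the check amounts to the following sketch, reusing the get_meminfo helper outlined earlier ('expected' stands in for the script's own nr_hugepages bookkeeping, which is not shown verbatim in this trace):

    expected=1024
    total=$(get_meminfo HugePages_Total) # 1024 in this run
    surp=$(get_meminfo HugePages_Surp)   # 0
    resv=$(get_meminfo HugePages_Rsvd)   # 0
    (( total == expected + surp + resv )) || exit 1
    (( total == expected )) || exit 1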
00:05:40.352 02:50:19 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:40.352 02:50:19 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:40.352 02:50:19 -- setup/common.sh@18 -- # local node=
00:05:40.352 02:50:19 -- setup/common.sh@19 -- # local var val
00:05:40.352 02:50:19 -- setup/common.sh@20 -- # local mem_f mem
00:05:40.352 02:50:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:40.352 02:50:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:40.352 02:50:19 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:40.352 02:50:19 -- setup/common.sh@28 -- # mapfile -t mem
00:05:40.352 02:50:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:40.352 02:50:19 -- setup/common.sh@31 -- # IFS=': '
00:05:40.352 02:50:19 -- setup/common.sh@31 -- # read -r var val _
00:05:40.352 02:50:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6254716 kB' 'MemAvailable: 9487532 kB' 'Buffers: 2436 kB' 'Cached: 3434956 kB' 'SwapCached: 0 kB' 'Active: 845976 kB' 'Inactive: 2709004 kB' 'Active(anon): 128056 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709004 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'AnonPages: 119240 kB' 'Mapped: 48136 kB' 'Shmem: 10468 kB' 'KReclaimable: 85664 kB' 'Slab: 165304 kB' 'SReclaimable: 85664 kB' 'SUnreclaim: 79640 kB' 'KernelStack: 6528 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54932 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 186220 kB' 'DirectMap2M: 6105088 kB' 'DirectMap1G: 8388608 kB'
00:05:40.353 02:50:19 -- setup/common.sh@31-32 -- # (get_meminfo scans MemTotal through FileHugePages against HugePages_Total; no match, continue)
setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:40.353 02:50:19 -- setup/common.sh@33 -- # echo 1024 00:05:40.353 02:50:19 -- setup/common.sh@33 -- # return 0 00:05:40.353 02:50:19 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:40.353 02:50:19 -- setup/hugepages.sh@112 -- # get_nodes 00:05:40.353 02:50:19 -- setup/hugepages.sh@27 -- # local node 00:05:40.353 02:50:19 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:40.353 02:50:19 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:40.353 02:50:19 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:40.353 02:50:19 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:40.353 02:50:19 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:40.353 02:50:19 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:40.353 02:50:19 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:40.353 02:50:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:40.353 02:50:19 -- setup/common.sh@18 -- # local node=0 00:05:40.353 02:50:19 -- setup/common.sh@19 -- # local var val 00:05:40.353 02:50:19 -- setup/common.sh@20 -- # local mem_f mem 00:05:40.353 02:50:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:40.353 02:50:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:40.353 02:50:19 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:40.353 02:50:19 -- setup/common.sh@28 -- # mapfile -t mem 00:05:40.353 02:50:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 6254716 kB' 'MemUsed: 5987264 kB' 'SwapCached: 0 kB' 'Active: 846300 kB' 'Inactive: 2709008 kB' 'Active(anon): 128380 kB' 'Inactive(anon): 0 kB' 'Active(file): 717920 kB' 'Inactive(file): 2709008 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1328 kB' 'Writeback: 0 kB' 'FilePages: 3437396 kB' 'Mapped: 48136 kB' 'AnonPages: 119512 kB' 'Shmem: 10468 kB' 'KernelStack: 6528 kB' 'PageTables: 3704 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 85664 kB' 'Slab: 165300 kB' 'SReclaimable: 85664 kB' 'SUnreclaim: 79636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.353 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.353 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ 
Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.354 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.354 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # continue 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # IFS=': ' 00:05:40.613 02:50:19 -- setup/common.sh@31 -- # read -r var val _ 00:05:40.613 02:50:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:40.613 
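The wall of xtrace above is setup/common.sh's get_meminfo: it slurps /proc/meminfo (or a node's meminfo file), strips any "Node <N> " prefix, and walks key/value pairs until the requested key matches — here HugePages_Surp for node 0, whose echo/return follows just below. A minimal sketch of the same pattern, assuming bash with extglob; the names mirror the xtrace, but the body is a paraphrase rather than the verbatim SPDK source:

shopt -s extglob

# get_meminfo KEY [NODE] - print KEY's value from /proc/meminfo, or from
# /sys/devices/system/node/node$NODE/meminfo when NODE is given.
get_meminfo() {
    local get=$1 node=$2 mem line var val
    local mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

With the values recorded in this run, get_meminfo HugePages_Total prints 1024 and get_meminfo HugePages_Surp 0 prints 0, matching the two echo/"return 0" pairs in the surrounding xtrace.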
02:50:19 -- setup/common.sh@33 -- # echo 0 00:05:40.613 02:50:19 -- setup/common.sh@33 -- # return 0 00:05:40.613 02:50:19 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:40.613 02:50:19 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:40.613 02:50:19 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:40.613 02:50:19 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:40.613 node0=1024 expecting 1024 00:05:40.613 02:50:19 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:40.613 02:50:19 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:40.613 00:05:40.613 real 0m1.049s 00:05:40.613 user 0m0.523s 00:05:40.613 sys 0m0.569s 00:05:40.613 02:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.613 02:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.613 ************************************ 00:05:40.613 END TEST no_shrink_alloc 00:05:40.613 ************************************ 00:05:40.613 02:50:19 -- setup/hugepages.sh@217 -- # clear_hp 00:05:40.613 02:50:19 -- setup/hugepages.sh@37 -- # local node hp 00:05:40.613 02:50:19 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:40.613 02:50:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.613 02:50:19 -- setup/hugepages.sh@41 -- # echo 0 00:05:40.613 02:50:19 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:40.613 02:50:19 -- setup/hugepages.sh@41 -- # echo 0 00:05:40.613 02:50:19 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:40.613 02:50:19 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:40.613 00:05:40.613 real 0m4.901s 00:05:40.613 user 0m2.324s 00:05:40.613 sys 0m2.563s 00:05:40.613 02:50:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:40.613 02:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.613 ************************************ 00:05:40.613 END TEST hugepages 00:05:40.613 ************************************ 00:05:40.614 02:50:19 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:40.614 02:50:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:40.614 02:50:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:40.614 02:50:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.614 ************************************ 00:05:40.614 START TEST driver 00:05:40.614 ************************************ 00:05:40.614 02:50:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:40.614 * Looking for test storage... 
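The guess_driver test starting here settles on uio_pci_generic: pick_driver first tries vfio, which requires at least one populated IOMMU group or the unsafe no-IOMMU override (both absent on this VM, hence the "(( 0 > 0 ))" and "[[ '' == Y ]]" tests failing below), then falls back to uio, accepting the module once modprobe resolves it to a .ko file. A rough sketch of that decision; the function names follow driver.sh as the xtrace shows them, but the bodies are paraphrased, not the verbatim script:

# A driver module counts as usable if modprobe resolves it to at least one .ko.
is_driver() {
    modprobe --show-depends "$1" 2> /dev/null | grep -q '\.ko'
}

pick_driver() {
    shopt -s nullglob # empty array, not a literal glob, when no groups exist
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    local unsafe_vfio=''
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    # vfio needs IOMMU groups or the explicit unsafe override; otherwise uio.
    if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
        echo vfio-pci
    elif is_driver uio_pci_generic; then
        echo uio_pci_generic
    else
        echo 'No valid driver found'
    fi
}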
00:05:40.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:40.614 02:50:19 -- setup/driver.sh@68 -- # setup reset 00:05:40.614 02:50:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:40.614 02:50:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:41.181 02:50:20 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:41.181 02:50:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.181 02:50:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.181 02:50:20 -- common/autotest_common.sh@10 -- # set +x 00:05:41.440 ************************************ 00:05:41.440 START TEST guess_driver 00:05:41.440 ************************************ 00:05:41.440 02:50:20 -- common/autotest_common.sh@1111 -- # guess_driver 00:05:41.440 02:50:20 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:41.440 02:50:20 -- setup/driver.sh@47 -- # local fail=0 00:05:41.440 02:50:20 -- setup/driver.sh@49 -- # pick_driver 00:05:41.440 02:50:20 -- setup/driver.sh@36 -- # vfio 00:05:41.440 02:50:20 -- setup/driver.sh@21 -- # local iommu_groups 00:05:41.440 02:50:20 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:41.441 02:50:20 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:41.441 02:50:20 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:41.441 02:50:20 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:41.441 02:50:20 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:41.441 02:50:20 -- setup/driver.sh@32 -- # return 1 00:05:41.441 02:50:20 -- setup/driver.sh@38 -- # uio 00:05:41.441 02:50:20 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:41.441 02:50:20 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:41.441 Looking for driver=uio_pci_generic 00:05:41.441 02:50:20 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:41.441 02:50:20 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:41.441 02:50:20 -- setup/driver.sh@45 -- # setup output config 00:05:41.441 02:50:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:41.441 02:50:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:42.008 02:50:21 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:42.008 02:50:21 -- setup/driver.sh@58 -- # continue 00:05:42.009 02:50:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.009 02:50:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.009 02:50:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.009 02:50:21 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.267 02:50:21 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:42.267 02:50:21 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:42.267 02:50:21 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:42.267 02:50:21 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:42.267 02:50:21 -- setup/driver.sh@65 -- # setup reset 00:05:42.267 02:50:21 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:42.267 02:50:21 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:42.835 00:05:42.835 real 0m1.438s 00:05:42.835 user 0m0.542s 00:05:42.835 sys 0m0.878s 00:05:42.835 02:50:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.835 02:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:42.835 ************************************ 00:05:42.835 END TEST guess_driver 00:05:42.835 ************************************ 00:05:42.835 00:05:42.835 real 0m2.202s 00:05:42.835 user 0m0.818s 00:05:42.835 sys 0m1.399s 00:05:42.835 02:50:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:42.835 02:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:42.835 ************************************ 00:05:42.835 END TEST driver 00:05:42.835 ************************************ 00:05:42.835 02:50:21 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:42.835 02:50:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:42.835 02:50:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:42.835 02:50:21 -- common/autotest_common.sh@10 -- # set +x 00:05:42.835 ************************************ 00:05:42.835 START TEST devices 00:05:42.835 ************************************ 00:05:42.835 02:50:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:43.094 * Looking for test storage... 00:05:43.094 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:43.094 02:50:22 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:43.094 02:50:22 -- setup/devices.sh@192 -- # setup reset 00:05:43.094 02:50:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:43.094 02:50:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:43.664 02:50:22 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:43.664 02:50:22 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:05:43.664 02:50:22 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:05:43.664 02:50:22 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:05:43.664 02:50:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.664 02:50:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:05:43.664 02:50:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:05:43.664 02:50:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.664 02:50:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:05:43.664 02:50:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:05:43.664 02:50:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.664 02:50:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:05:43.664 02:50:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:05:43.664 02:50:22 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:05:43.664 02:50:22 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:05:43.664 02:50:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:05:43.664 02:50:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:43.664 02:50:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:05:43.664 02:50:22 -- setup/devices.sh@196 -- # blocks=() 00:05:43.664 02:50:22 -- setup/devices.sh@196 -- # declare -a blocks 00:05:43.664 02:50:22 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:43.664 02:50:22 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:43.664 02:50:22 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:43.664 02:50:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.664 02:50:22 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:43.664 02:50:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:43.664 02:50:22 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:43.664 02:50:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:43.664 02:50:22 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:43.664 02:50:22 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:43.664 02:50:22 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:43.932 No valid GPT data, bailing 00:05:43.932 02:50:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:43.932 02:50:22 -- scripts/common.sh@391 -- # pt= 00:05:43.932 02:50:22 -- scripts/common.sh@392 -- # return 1 00:05:43.932 02:50:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:43.932 02:50:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:43.932 02:50:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:43.932 02:50:22 -- setup/common.sh@80 -- # echo 4294967296 00:05:43.932 02:50:22 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:43.932 02:50:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.932 02:50:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:43.932 02:50:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.932 02:50:22 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:05:43.932 02:50:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:43.932 02:50:22 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:43.932 02:50:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:43.932 02:50:22 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:05:43.932 02:50:22 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:05:43.932 02:50:22 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:05:43.932 No valid GPT data, bailing 00:05:43.932 02:50:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:05:43.932 02:50:22 -- scripts/common.sh@391 -- # pt= 00:05:43.932 02:50:22 -- scripts/common.sh@392 -- # return 1 00:05:43.932 02:50:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:05:43.932 02:50:22 -- setup/common.sh@76 -- # local dev=nvme0n2 00:05:43.932 02:50:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:05:43.932 02:50:22 -- setup/common.sh@80 -- # echo 4294967296 00:05:43.932 02:50:22 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:43.932 02:50:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.932 02:50:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:43.932 02:50:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.932 02:50:22 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:05:43.932 02:50:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:43.932 02:50:22 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:05:43.932 02:50:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:05:43.932 02:50:22 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:05:43.932 02:50:22 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:05:43.932 02:50:22 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:05:43.932 No valid GPT data, bailing 00:05:43.932 02:50:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:05:43.932 02:50:23 -- scripts/common.sh@391 -- # pt= 00:05:43.932 02:50:23 -- scripts/common.sh@392 -- # return 1 00:05:43.932 02:50:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:05:43.932 02:50:23 -- setup/common.sh@76 -- # local dev=nvme0n3 00:05:43.932 02:50:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:05:43.932 02:50:23 -- setup/common.sh@80 -- # echo 4294967296 00:05:43.932 02:50:23 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:43.932 02:50:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:43.932 02:50:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:05:43.932 02:50:23 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:43.932 02:50:23 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:43.932 02:50:23 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:43.932 02:50:23 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:05:43.932 02:50:23 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:05:43.932 02:50:23 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:43.932 02:50:23 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:05:43.932 02:50:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:43.932 No valid GPT data, bailing 00:05:44.190 02:50:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:44.190 02:50:23 -- scripts/common.sh@391 -- # pt= 00:05:44.190 02:50:23 -- scripts/common.sh@392 -- # return 1 00:05:44.190 02:50:23 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:44.190 02:50:23 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:44.190 02:50:23 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:44.190 02:50:23 -- setup/common.sh@80 -- # echo 5368709120 00:05:44.190 02:50:23 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:44.190 02:50:23 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:44.190 02:50:23 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:05:44.190 02:50:23 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:44.190 02:50:23 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:44.190 02:50:23 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:44.190 02:50:23 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:44.190 02:50:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:44.190 02:50:23 -- common/autotest_common.sh@10 -- # set +x 00:05:44.190 
************************************ 00:05:44.190 START TEST nvme_mount 00:05:44.190 ************************************ 00:05:44.190 02:50:23 -- common/autotest_common.sh@1111 -- # nvme_mount 00:05:44.190 02:50:23 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:44.190 02:50:23 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:44.190 02:50:23 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:44.190 02:50:23 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:44.190 02:50:23 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:44.190 02:50:23 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:44.190 02:50:23 -- setup/common.sh@40 -- # local part_no=1 00:05:44.190 02:50:23 -- setup/common.sh@41 -- # local size=1073741824 00:05:44.190 02:50:23 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:44.190 02:50:23 -- setup/common.sh@44 -- # parts=() 00:05:44.190 02:50:23 -- setup/common.sh@44 -- # local parts 00:05:44.190 02:50:23 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:44.190 02:50:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.190 02:50:23 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:44.190 02:50:23 -- setup/common.sh@46 -- # (( part++ )) 00:05:44.190 02:50:23 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:44.190 02:50:23 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:44.190 02:50:23 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:44.190 02:50:23 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:45.126 Creating new GPT entries in memory. 00:05:45.126 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:45.126 other utilities. 00:05:45.126 02:50:24 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:45.126 02:50:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:45.126 02:50:24 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:45.126 02:50:24 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:45.126 02:50:24 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:46.502 Creating new GPT entries in memory. 00:05:46.502 The operation has completed successfully. 
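The sgdisk call that just completed is setup/common.sh's partition_drive laying down partition 1 of nvme0n1; the "(( part_start ... ))" arithmetic above tracks the sector boundaries. A condensed sketch, assuming 512-byte sectors; the real helper additionally routes partition events through scripts/sync_dev_uevents.sh to wait for udev, which is elided here:

# partition_drive DISK [PART_NO] [SIZE_BYTES] - zap /dev/DISK and append
# PART_NO equal GPT partitions, the first one starting at sector 2048.
partition_drive() {
    local disk=$1 part_no=${2:-1} size=${3:-1073741824}
    local part part_start=0 part_end=0
    (( size /= 4096 )) # bytes -> per-partition sector count used by the test
    sgdisk "/dev/$disk" --zap-all
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        # flock keeps concurrent label rewrites off the disk node.
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
}

partition_drive nvme0n1 1 reproduces the --new=1:2048:264191 call above; the dm_mount test further down runs the two-partition variant, which yields --new=2:264192:526335.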
00:05:46.502 02:50:25 -- setup/common.sh@57 -- # (( part++ )) 00:05:46.503 02:50:25 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:46.503 02:50:25 -- setup/common.sh@62 -- # wait 70364 00:05:46.503 02:50:25 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.503 02:50:25 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:46.503 02:50:25 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.503 02:50:25 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:46.503 02:50:25 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:46.503 02:50:25 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.503 02:50:25 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.503 02:50:25 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:46.503 02:50:25 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:46.503 02:50:25 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.503 02:50:25 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.503 02:50:25 -- setup/devices.sh@53 -- # local found=0 00:05:46.503 02:50:25 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.503 02:50:25 -- setup/devices.sh@56 -- # : 00:05:46.503 02:50:25 -- setup/devices.sh@59 -- # local pci status 00:05:46.503 02:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.503 02:50:25 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:46.503 02:50:25 -- setup/devices.sh@47 -- # setup output config 00:05:46.503 02:50:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:46.503 02:50:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:46.503 02:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.503 02:50:25 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:46.503 02:50:25 -- setup/devices.sh@63 -- # found=1 00:05:46.503 02:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.503 02:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.503 02:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.761 02:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.761 02:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.761 02:50:25 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:46.761 02:50:25 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:46.761 02:50:25 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:46.761 02:50:25 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:46.761 02:50:25 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.761 02:50:25 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:46.761 02:50:25 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:46.761 02:50:25 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:46.761 02:50:25 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.761 02:50:25 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:46.761 02:50:25 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:46.761 02:50:25 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:46.761 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:46.761 02:50:25 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:46.761 02:50:25 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:47.020 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:47.020 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:47.020 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:47.020 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:47.020 02:50:26 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:47.020 02:50:26 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:47.020 02:50:26 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.020 02:50:26 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:47.020 02:50:26 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:47.020 02:50:26 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.020 02:50:26 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.020 02:50:26 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:47.020 02:50:26 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:47.020 02:50:26 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.020 02:50:26 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.020 02:50:26 -- setup/devices.sh@53 -- # local found=0 00:05:47.020 02:50:26 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.020 02:50:26 -- setup/devices.sh@56 -- # : 00:05:47.021 02:50:26 -- setup/devices.sh@59 -- # local pci status 00:05:47.021 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.021 02:50:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:47.021 02:50:26 -- setup/devices.sh@47 -- # setup output config 00:05:47.021 02:50:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.021 02:50:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.279 02:50:26 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.279 02:50:26 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:47.279 02:50:26 -- setup/devices.sh@63 -- # found=1 00:05:47.279 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.279 02:50:26 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.279 
02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.538 02:50:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.538 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.538 02:50:26 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.538 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.538 02:50:26 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:47.538 02:50:26 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:47.538 02:50:26 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.538 02:50:26 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:47.538 02:50:26 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:47.538 02:50:26 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:47.538 02:50:26 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:05:47.538 02:50:26 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:47.538 02:50:26 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:47.538 02:50:26 -- setup/devices.sh@50 -- # local mount_point= 00:05:47.538 02:50:26 -- setup/devices.sh@51 -- # local test_file= 00:05:47.538 02:50:26 -- setup/devices.sh@53 -- # local found=0 00:05:47.538 02:50:26 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:47.538 02:50:26 -- setup/devices.sh@59 -- # local pci status 00:05:47.538 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.538 02:50:26 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:47.538 02:50:26 -- setup/devices.sh@47 -- # setup output config 00:05:47.538 02:50:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:47.538 02:50:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:47.797 02:50:26 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.797 02:50:26 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:47.797 02:50:26 -- setup/devices.sh@63 -- # found=1 00:05:47.797 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:47.797 02:50:26 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:47.797 02:50:26 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.056 02:50:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.056 02:50:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.056 02:50:27 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:48.056 02:50:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:48.056 02:50:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:48.056 02:50:27 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:48.056 02:50:27 -- setup/devices.sh@68 -- # return 0 00:05:48.056 02:50:27 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:48.056 02:50:27 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:48.315 02:50:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:48.315 02:50:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:48.315 02:50:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:48.315 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:48.315 00:05:48.315 real 0m4.047s 00:05:48.315 user 0m0.712s 00:05:48.315 sys 0m1.050s 00:05:48.315 02:50:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:48.315 02:50:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 ************************************ 00:05:48.315 END TEST nvme_mount 00:05:48.315 ************************************ 00:05:48.315 02:50:27 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:48.315 02:50:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:48.315 02:50:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:48.315 02:50:27 -- common/autotest_common.sh@10 -- # set +x 00:05:48.315 ************************************ 00:05:48.315 START TEST dm_mount 00:05:48.315 ************************************ 00:05:48.315 02:50:27 -- common/autotest_common.sh@1111 -- # dm_mount 00:05:48.315 02:50:27 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:48.315 02:50:27 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:48.315 02:50:27 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:48.315 02:50:27 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:48.315 02:50:27 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:48.315 02:50:27 -- setup/common.sh@40 -- # local part_no=2 00:05:48.315 02:50:27 -- setup/common.sh@41 -- # local size=1073741824 00:05:48.315 02:50:27 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:48.315 02:50:27 -- setup/common.sh@44 -- # parts=() 00:05:48.315 02:50:27 -- setup/common.sh@44 -- # local parts 00:05:48.315 02:50:27 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:48.315 02:50:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.315 02:50:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.315 02:50:27 -- setup/common.sh@46 -- # (( part++ )) 00:05:48.315 02:50:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.315 02:50:27 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:48.315 02:50:27 -- setup/common.sh@46 -- # (( part++ )) 00:05:48.315 02:50:27 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:48.315 02:50:27 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:48.315 02:50:27 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:48.315 02:50:27 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:49.252 Creating new GPT entries in memory. 00:05:49.252 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:49.252 other utilities. 00:05:49.252 02:50:28 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:49.252 02:50:28 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:49.252 02:50:28 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:49.252 02:50:28 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:49.252 02:50:28 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:50.631 Creating new GPT entries in memory. 00:05:50.631 The operation has completed successfully. 00:05:50.631 02:50:29 -- setup/common.sh@57 -- # (( part++ )) 00:05:50.631 02:50:29 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:50.631 02:50:29 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:50.631 02:50:29 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:50.631 02:50:29 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:51.568 The operation has completed successfully. 00:05:51.568 02:50:30 -- setup/common.sh@57 -- # (( part++ )) 00:05:51.568 02:50:30 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:51.568 02:50:30 -- setup/common.sh@62 -- # wait 70801 00:05:51.568 02:50:30 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:51.568 02:50:30 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.568 02:50:30 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:51.568 02:50:30 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:51.568 02:50:30 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:51.568 02:50:30 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.568 02:50:30 -- setup/devices.sh@161 -- # break 00:05:51.568 02:50:30 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.568 02:50:30 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:51.568 02:50:30 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:51.568 02:50:30 -- setup/devices.sh@166 -- # dm=dm-0 00:05:51.568 02:50:30 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:51.568 02:50:30 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:51.568 02:50:30 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.568 02:50:30 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:51.568 02:50:30 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.568 02:50:30 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:51.568 02:50:30 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:51.568 02:50:30 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.568 02:50:30 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:51.568 02:50:30 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:51.568 02:50:30 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:51.568 02:50:30 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:51.569 02:50:30 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:51.569 02:50:30 -- setup/devices.sh@53 -- # local found=0 00:05:51.569 02:50:30 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:51.569 02:50:30 -- setup/devices.sh@56 -- # : 00:05:51.569 02:50:30 -- setup/devices.sh@59 -- # local pci status 00:05:51.569 02:50:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.569 02:50:30 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:51.569 02:50:30 -- setup/devices.sh@47 -- # setup output config 00:05:51.569 02:50:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:51.569 02:50:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:51.827 02:50:30 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]]
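At this point dm_mount has stacked a device-mapper node over the two fresh partitions: dmsetup create produced /dev/mapper/nvme_dm_test, readlink resolved it to dm-0, and each partition's holders directory was checked to prove it now backs the dm device. A hedged sketch of that sequence; the linear table piped to dmsetup is an assumption, since the log never prints the table SPDK actually uses:

pv0=/dev/nvme0n1p1 pv1=/dev/nvme0n1p2 dm_name=nvme_dm_test
# One "start length linear device offset" row per partition concatenates them.
dmsetup create "$dm_name" << EOF
0 $(blockdev --getsz $pv0) linear $pv0 0
$(blockdev --getsz $pv0) $(blockdev --getsz $pv1) linear $pv1 0
EOF
for t in {1..5}; do # the /dev/mapper node can lag the create call
    [[ -e /dev/mapper/$dm_name ]] && break
    sleep 1
done
dm=$(readlink -f "/dev/mapper/$dm_name") # e.g. /dev/dm-0
dm=${dm##*/}                             # -> dm-0
# Both backing partitions must now list the dm device as a holder.
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]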
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.827 02:50:30 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:51.827 02:50:30 -- setup/devices.sh@63 -- # found=1 00:05:51.827 02:50:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.827 02:50:30 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.827 02:50:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.827 02:50:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.827 02:50:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:51.827 02:50:30 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:51.827 02:50:30 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.086 02:50:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.086 02:50:31 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:52.086 02:50:31 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.086 02:50:31 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:52.086 02:50:31 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:52.086 02:50:31 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.086 02:50:31 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:52.086 02:50:31 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:05:52.086 02:50:31 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:52.086 02:50:31 -- setup/devices.sh@50 -- # local mount_point= 00:05:52.086 02:50:31 -- setup/devices.sh@51 -- # local test_file= 00:05:52.086 02:50:31 -- setup/devices.sh@53 -- # local found=0 00:05:52.086 02:50:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:52.086 02:50:31 -- setup/devices.sh@59 -- # local pci status 00:05:52.086 02:50:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.086 02:50:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:05:52.086 02:50:31 -- setup/devices.sh@47 -- # setup output config 00:05:52.086 02:50:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:52.086 02:50:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:52.345 02:50:31 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.345 02:50:31 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:52.345 02:50:31 -- setup/devices.sh@63 -- # found=1 00:05:52.345 02:50:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.345 02:50:31 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.345 02:50:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.345 02:50:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.345 02:50:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.345 02:50:31 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:05:52.345 02:50:31 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:52.604 02:50:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:52.604 02:50:31 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:52.604 02:50:31 -- setup/devices.sh@68 -- # return 0 00:05:52.604 02:50:31 -- setup/devices.sh@187 -- # cleanup_dm 00:05:52.604 02:50:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.604 02:50:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:52.604 02:50:31 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:52.604 02:50:31 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.604 02:50:31 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:52.604 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:52.604 02:50:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:52.604 02:50:31 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:52.604 00:05:52.604 real 0m4.271s 00:05:52.604 user 0m0.507s 00:05:52.604 sys 0m0.712s 00:05:52.604 02:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.604 ************************************ 00:05:52.604 02:50:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.604 END TEST dm_mount 00:05:52.604 ************************************ 00:05:52.604 02:50:31 -- setup/devices.sh@1 -- # cleanup 00:05:52.604 02:50:31 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:52.604 02:50:31 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:52.604 02:50:31 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.604 02:50:31 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:52.604 02:50:31 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:52.604 02:50:31 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:52.863 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:52.863 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:52.863 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:52.863 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:52.863 02:50:31 -- setup/devices.sh@12 -- # cleanup_dm 00:05:52.863 02:50:31 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:52.863 02:50:31 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:52.863 02:50:31 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:52.863 02:50:31 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:52.863 02:50:31 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:52.863 02:50:31 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:52.863 ************************************ 00:05:52.863 END TEST devices 00:05:52.863 ************************************ 00:05:52.863 00:05:52.863 real 0m9.970s 00:05:52.863 user 0m1.910s 00:05:52.863 sys 0m2.407s 00:05:52.863 02:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.863 02:50:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.863 00:05:52.863 real 0m22.258s 00:05:52.863 user 0m7.328s 00:05:52.863 sys 0m9.127s 00:05:52.863 02:50:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:52.863 02:50:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.863 ************************************ 00:05:52.863 END TEST setup.sh 00:05:52.863 ************************************ 00:05:53.122 02:50:32 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:53.690 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:53.690 Hugepages 00:05:53.690 node hugesize free / total 00:05:53.690 node0 1048576kB 0 / 0 00:05:53.690 node0 2048kB 2048 / 2048 00:05:53.690 00:05:53.690 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:53.690 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:53.690 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:05:53.949 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:05:53.949 02:50:32 -- spdk/autotest.sh@130 -- # uname -s 00:05:53.949 02:50:32 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:53.949 02:50:32 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:53.949 02:50:32 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:54.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:54.517 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:54.776 02:50:33 -- common/autotest_common.sh@1518 -- # sleep 1 00:05:55.731 02:50:34 -- common/autotest_common.sh@1519 -- # bdfs=() 00:05:55.731 02:50:34 -- common/autotest_common.sh@1519 -- # local bdfs 00:05:55.731 02:50:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:55.731 02:50:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:55.731 02:50:34 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:55.731 02:50:34 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:55.731 02:50:34 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:55.731 02:50:34 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:55.731 02:50:34 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:55.731 02:50:34 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:05:55.731 02:50:34 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:55.731 02:50:34 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:56.299 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:56.299 Waiting for block devices as requested 00:05:56.299 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:56.299 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:56.299 02:50:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:56.299 02:50:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:05:56.299 02:50:35 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:56.299 02:50:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:56.299 02:50:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:56.299 02:50:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1543 -- # continue 00:05:56.299 02:50:35 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:56.299 02:50:35 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:56.299 02:50:35 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:05:56.299 02:50:35 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:56.299 02:50:35 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:56.299 02:50:35 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:56.299 02:50:35 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:56.299 02:50:35 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:56.299 02:50:35 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:56.299 02:50:35 -- common/autotest_common.sh@1543 -- # continue 00:05:56.299 02:50:35 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:56.299 02:50:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:56.299 02:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.558 02:50:35 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:56.558 02:50:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:56.558 02:50:35 -- common/autotest_common.sh@10 -- # set +x 00:05:56.558 02:50:35 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:57.126 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:05:57.126 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.126 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:57.385 02:50:36 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:57.385 02:50:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:57.385 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.385 02:50:36 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:57.385 02:50:36 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:05:57.385 02:50:36 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:05:57.385 02:50:36 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:57.385 02:50:36 -- common/autotest_common.sh@1563 -- # local bdfs 00:05:57.385 02:50:36 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:05:57.385 02:50:36 -- common/autotest_common.sh@1499 -- # bdfs=() 00:05:57.385 02:50:36 -- common/autotest_common.sh@1499 -- # local bdfs 00:05:57.385 02:50:36 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:57.385 02:50:36 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:57.385 02:50:36 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:05:57.385 02:50:36 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:05:57.385 02:50:36 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:57.385 02:50:36 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:57.385 02:50:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:57.385 02:50:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:57.385 02:50:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:57.385 02:50:36 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:05:57.385 02:50:36 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:57.385 02:50:36 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:57.385 02:50:36 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:57.385 02:50:36 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:05:57.385 02:50:36 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:05:57.385 02:50:36 -- common/autotest_common.sh@1579 -- # return 0 00:05:57.385 02:50:36 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:57.385 02:50:36 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:57.385 02:50:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:57.385 02:50:36 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:57.385 02:50:36 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:57.385 02:50:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:57.385 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.385 02:50:36 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:57.385 02:50:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.385 02:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.385 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.385 ************************************ 00:05:57.385 START TEST env 00:05:57.385 ************************************ 00:05:57.385 02:50:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:57.645 * Looking for test storage... 
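Note (annotation, not captured output): env.sh chains standalone unit-test binaries, so each stage logged below can also be reproduced outside the CI harness. A minimal sketch, assuming the same checkout layout seen in this run and a completed build:

    # run the env unit binaries directly from the repo root
    cd /home/vagrant/spdk_repo/spdk
    ./test/env/memory/memory_ut     # mem-map alloc/translation/registration tests
    ./test/env/vtophys/vtophys      # EAL bring-up plus vtophys malloc tests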
00:05:57.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:57.645 02:50:36 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:57.645 02:50:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.645 02:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.645 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.645 ************************************ 00:05:57.645 START TEST env_memory 00:05:57.645 ************************************ 00:05:57.645 02:50:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:57.645 00:05:57.645 00:05:57.645 CUnit - A unit testing framework for C - Version 2.1-3 00:05:57.645 http://cunit.sourceforge.net/ 00:05:57.645 00:05:57.645 00:05:57.645 Suite: memory 00:05:57.645 Test: alloc and free memory map ...[2024-04-23 02:50:36.737636] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:57.645 passed 00:05:57.645 Test: mem map translation ...[2024-04-23 02:50:36.769765] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:57.645 [2024-04-23 02:50:36.770013] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:57.645 [2024-04-23 02:50:36.770235] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:57.645 [2024-04-23 02:50:36.770394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:57.904 passed 00:05:57.904 Test: mem map registration ...[2024-04-23 02:50:36.836408] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:57.904 [2024-04-23 02:50:36.836650] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:57.904 passed 00:05:57.905 Test: mem map adjacent registrations ...passed 00:05:57.905 00:05:57.905 Run Summary: Type Total Ran Passed Failed Inactive 00:05:57.905 suites 1 1 n/a 0 0 00:05:57.905 tests 4 4 4 0 0 00:05:57.905 asserts 152 152 152 0 n/a 00:05:57.905 00:05:57.905 Elapsed time = 0.219 seconds 00:05:57.905 00:05:57.905 real 0m0.232s 00:05:57.905 user 0m0.219s 00:05:57.905 sys 0m0.011s 00:05:57.905 02:50:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.905 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.905 ************************************ 00:05:57.905 END TEST env_memory 00:05:57.905 ************************************ 00:05:57.905 02:50:36 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:57.905 02:50:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.905 02:50:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.905 02:50:36 -- common/autotest_common.sh@10 -- # set +x 00:05:57.905 ************************************ 00:05:57.905 START TEST env_vtophys 00:05:57.905 ************************************ 00:05:57.905 02:50:37 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:58.164 EAL: lib.eal log level changed from notice to debug 00:05:58.164 EAL: Detected lcore 0 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 1 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 2 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 3 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 4 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 5 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 6 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 7 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 8 as core 0 on socket 0 00:05:58.164 EAL: Detected lcore 9 as core 0 on socket 0 00:05:58.164 EAL: Maximum logical cores by configuration: 128 00:05:58.164 EAL: Detected CPU lcores: 10 00:05:58.164 EAL: Detected NUMA nodes: 1 00:05:58.164 EAL: Checking presence of .so 'librte_eal.so.24.2' 00:05:58.164 EAL: Detected shared linkage of DPDK 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so.24.2 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so.24.2 00:05:58.164 EAL: Registered [vdev] bus. 00:05:58.164 EAL: bus.vdev log level changed from disabled to notice 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so.24.2 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so.24.2 00:05:58.164 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:58.164 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_pci.so 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_bus_vdev.so 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_mempool_ring.so 00:05:58.164 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.2/librte_net_i40e.so 00:05:58.164 EAL: No shared files mode enabled, IPC will be disabled 00:05:58.164 EAL: No shared files mode enabled, IPC is disabled 00:05:58.164 EAL: Selected IOVA mode 'PA' 00:05:58.164 EAL: Probing VFIO support... 00:05:58.164 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:58.164 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:58.164 EAL: Ask a virtual area of 0x2e000 bytes 00:05:58.164 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:58.164 EAL: Setting up physically contiguous memory... 
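Note (annotation, not captured output): the memseg lists reserved below are carved from the 2 MB hugepage pool reported by setup.sh status earlier (node0 2048kB 2048 / 2048). A hedged sketch for confirming that pool on a comparable host, using the standard procfs/sysfs locations rather than anything this log prints:

    # confirm the hugepage pool the EAL will back its memseg lists with
    grep Huge /proc/meminfo
    cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages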
00:05:58.164 EAL: Setting maximum number of open files to 524288 00:05:58.164 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:58.164 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:58.164 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.164 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:58.164 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.164 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.164 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:58.164 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:58.164 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.165 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:58.165 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.165 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.165 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:58.165 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:58.165 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.165 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:58.165 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.165 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.165 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:58.165 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:58.165 EAL: Ask a virtual area of 0x61000 bytes 00:05:58.165 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:58.165 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:58.165 EAL: Ask a virtual area of 0x400000000 bytes 00:05:58.165 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:58.165 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:58.165 EAL: Hugepages will be freed exactly as allocated. 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: TSC frequency is ~2200000 KHz 00:05:58.165 EAL: Main lcore 0 is ready (tid=7f6304e15a00;cpuset=[0]) 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 0 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 2MB 00:05:58.165 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:58.165 EAL: Mem event callback 'spdk:(nil)' registered 00:05:58.165 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:58.165 00:05:58.165 00:05:58.165 CUnit - A unit testing framework for C - Version 2.1-3 00:05:58.165 http://cunit.sourceforge.net/ 00:05:58.165 00:05:58.165 00:05:58.165 Suite: components_suite 00:05:58.165 Test: vtophys_malloc_test ...passed 00:05:58.165 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
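Note (annotation, not captured output): the expand/shrink rounds that follow use allocation sizes of the form 2^n + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514, 1026), so each round roughly doubles the heap while staying a multiple of the 2 MB hugepage size. A one-liner reproducing the sequence:

    for n in $(seq 1 10); do echo "$(( (1 << n) + 2 )) MB"; done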
00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 4MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 4MB 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 6MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 6MB 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 10MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 10MB 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 18MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 18MB 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 34MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 34MB 00:05:58.165 EAL: Trying to obtain current memory policy. 
00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 66MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 66MB 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.165 EAL: Restoring previous memory policy: 4 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was expanded by 130MB 00:05:58.165 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.165 EAL: request: mp_malloc_sync 00:05:58.165 EAL: No shared files mode enabled, IPC is disabled 00:05:58.165 EAL: Heap on socket 0 was shrunk by 130MB 00:05:58.165 EAL: Trying to obtain current memory policy. 00:05:58.165 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.424 EAL: Restoring previous memory policy: 4 00:05:58.424 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.424 EAL: request: mp_malloc_sync 00:05:58.424 EAL: No shared files mode enabled, IPC is disabled 00:05:58.424 EAL: Heap on socket 0 was expanded by 258MB 00:05:58.424 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.424 EAL: request: mp_malloc_sync 00:05:58.424 EAL: No shared files mode enabled, IPC is disabled 00:05:58.424 EAL: Heap on socket 0 was shrunk by 258MB 00:05:58.424 EAL: Trying to obtain current memory policy. 00:05:58.424 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.424 EAL: Restoring previous memory policy: 4 00:05:58.424 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.424 EAL: request: mp_malloc_sync 00:05:58.424 EAL: No shared files mode enabled, IPC is disabled 00:05:58.424 EAL: Heap on socket 0 was expanded by 514MB 00:05:58.424 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.684 EAL: request: mp_malloc_sync 00:05:58.684 EAL: No shared files mode enabled, IPC is disabled 00:05:58.684 EAL: Heap on socket 0 was shrunk by 514MB 00:05:58.684 EAL: Trying to obtain current memory policy. 
00:05:58.684 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:58.684 EAL: Restoring previous memory policy: 4 00:05:58.684 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.684 EAL: request: mp_malloc_sync 00:05:58.684 EAL: No shared files mode enabled, IPC is disabled 00:05:58.684 EAL: Heap on socket 0 was expanded by 1026MB 00:05:58.942 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.942 passed 00:05:58.942 00:05:58.942 Run Summary: Type Total Ran Passed Failed Inactive 00:05:58.942 suites 1 1 n/a 0 0 00:05:58.942 tests 2 2 2 0 0 00:05:58.942 asserts 5274 5274 5274 0 n/a 00:05:58.942 00:05:58.942 Elapsed time = 0.739 seconds 00:05:58.942 EAL: request: mp_malloc_sync 00:05:58.942 EAL: No shared files mode enabled, IPC is disabled 00:05:58.942 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:58.942 EAL: Calling mem event callback 'spdk:(nil)' 00:05:58.942 EAL: request: mp_malloc_sync 00:05:58.942 EAL: No shared files mode enabled, IPC is disabled 00:05:58.942 EAL: Heap on socket 0 was shrunk by 2MB 00:05:58.942 EAL: No shared files mode enabled, IPC is disabled 00:05:58.942 EAL: No shared files mode enabled, IPC is disabled 00:05:58.942 EAL: No shared files mode enabled, IPC is disabled 00:05:58.942 00:05:58.942 real 0m0.935s 00:05:58.942 user 0m0.479s 00:05:58.942 sys 0m0.322s 00:05:58.942 02:50:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:58.942 02:50:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.942 ************************************ 00:05:58.942 END TEST env_vtophys 00:05:58.942 ************************************ 00:05:58.942 02:50:38 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:58.942 02:50:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:58.943 02:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:58.943 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:58.943 ************************************ 00:05:58.943 START TEST env_pci 00:05:58.943 ************************************ 00:05:58.943 02:50:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:59.201 00:05:59.201 00:05:59.201 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.201 http://cunit.sourceforge.net/ 00:05:59.201 00:05:59.201 00:05:59.201 Suite: pci 00:05:59.201 Test: pci_hook ...[2024-04-23 02:50:38.109923] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 72009 has claimed it 00:05:59.201 passed 00:05:59.201 00:05:59.202 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.202 suites 1 1 n/a 0 0 00:05:59.202 tests 1 1 1 0 0 00:05:59.202 asserts 25 25 25 0 n/a 00:05:59.202 00:05:59.202 Elapsed time = 0.002 seconds 00:05:59.202 EAL: Cannot find device (10000:00:01.0) 00:05:59.202 EAL: Failed to attach device on primary process 00:05:59.202 00:05:59.202 real 0m0.019s 00:05:59.202 user 0m0.010s 00:05:59.202 sys 0m0.009s 00:05:59.202 02:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.202 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.202 ************************************ 00:05:59.202 END TEST env_pci 00:05:59.202 ************************************ 00:05:59.202 02:50:38 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:59.202 02:50:38 -- env/env.sh@15 -- # uname 00:05:59.202 02:50:38 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:59.202 02:50:38 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:59.202 02:50:38 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.202 02:50:38 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:05:59.202 02:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.202 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.202 ************************************ 00:05:59.202 START TEST env_dpdk_post_init 00:05:59.202 ************************************ 00:05:59.202 02:50:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:59.202 EAL: Detected CPU lcores: 10 00:05:59.202 EAL: Detected NUMA nodes: 1 00:05:59.202 EAL: Detected shared linkage of DPDK 00:05:59.202 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.202 EAL: Selected IOVA mode 'PA' 00:05:59.464 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:59.464 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:59.464 Starting DPDK initialization... 00:05:59.464 Starting SPDK post initialization... 00:05:59.464 SPDK NVMe probe 00:05:59.464 Attaching to 0000:00:10.0 00:05:59.464 Attaching to 0000:00:11.0 00:05:59.464 Attached to 0000:00:10.0 00:05:59.464 Attached to 0000:00:11.0 00:05:59.464 Cleaning up... 00:05:59.464 00:05:59.464 real 0m0.183s 00:05:59.464 user 0m0.054s 00:05:59.464 sys 0m0.030s 00:05:59.464 02:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.464 ************************************ 00:05:59.464 END TEST env_dpdk_post_init 00:05:59.464 ************************************ 00:05:59.464 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.464 02:50:38 -- env/env.sh@26 -- # uname 00:05:59.464 02:50:38 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:59.464 02:50:38 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:59.464 02:50:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.464 02:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.464 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.464 ************************************ 00:05:59.464 START TEST env_mem_callbacks 00:05:59.464 ************************************ 00:05:59.464 02:50:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:59.464 EAL: Detected CPU lcores: 10 00:05:59.464 EAL: Detected NUMA nodes: 1 00:05:59.464 EAL: Detected shared linkage of DPDK 00:05:59.464 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:59.464 EAL: Selected IOVA mode 'PA' 00:05:59.729 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:59.729 00:05:59.729 00:05:59.729 CUnit - A unit testing framework for C - Version 2.1-3 00:05:59.729 http://cunit.sourceforge.net/ 00:05:59.729 00:05:59.729 00:05:59.729 Suite: memory 00:05:59.729 Test: test ... 
00:05:59.729 register 0x200000200000 2097152 00:05:59.729 malloc 3145728 00:05:59.729 register 0x200000400000 4194304 00:05:59.729 buf 0x200000500000 len 3145728 PASSED 00:05:59.729 malloc 64 00:05:59.729 buf 0x2000004fff40 len 64 PASSED 00:05:59.729 malloc 4194304 00:05:59.729 register 0x200000800000 6291456 00:05:59.729 buf 0x200000a00000 len 4194304 PASSED 00:05:59.729 free 0x200000500000 3145728 00:05:59.729 free 0x2000004fff40 64 00:05:59.729 unregister 0x200000400000 4194304 PASSED 00:05:59.729 free 0x200000a00000 4194304 00:05:59.729 unregister 0x200000800000 6291456 PASSED 00:05:59.729 malloc 8388608 00:05:59.729 register 0x200000400000 10485760 00:05:59.729 buf 0x200000600000 len 8388608 PASSED 00:05:59.729 free 0x200000600000 8388608 00:05:59.729 unregister 0x200000400000 10485760 PASSED 00:05:59.729 passed 00:05:59.729 00:05:59.729 Run Summary: Type Total Ran Passed Failed Inactive 00:05:59.729 suites 1 1 n/a 0 0 00:05:59.729 tests 1 1 1 0 0 00:05:59.729 asserts 15 15 15 0 n/a 00:05:59.729 00:05:59.729 Elapsed time = 0.007 seconds 00:05:59.729 00:05:59.729 real 0m0.141s 00:05:59.729 user 0m0.019s 00:05:59.729 sys 0m0.021s 00:05:59.729 02:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.729 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.729 ************************************ 00:05:59.729 END TEST env_mem_callbacks 00:05:59.729 ************************************ 00:05:59.729 00:05:59.729 real 0m2.186s 00:05:59.729 user 0m1.032s 00:05:59.729 sys 0m0.727s 00:05:59.729 02:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:59.729 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.729 ************************************ 00:05:59.729 END TEST env 00:05:59.729 ************************************ 00:05:59.729 02:50:38 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:59.729 02:50:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.729 02:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.729 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.729 ************************************ 00:05:59.729 START TEST rpc 00:05:59.729 ************************************ 00:05:59.729 02:50:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:59.989 * Looking for test storage... 00:05:59.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:59.989 02:50:38 -- rpc/rpc.sh@65 -- # spdk_pid=72138 00:05:59.989 02:50:38 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.989 02:50:38 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:59.989 02:50:38 -- rpc/rpc.sh@67 -- # waitforlisten 72138 00:05:59.989 02:50:38 -- common/autotest_common.sh@817 -- # '[' -z 72138 ']' 00:05:59.989 02:50:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.989 02:50:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:59.989 02:50:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
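Note (annotation, not captured output): the target is started with '-e bdev' so the trace test later has a live tpoint group, and the harness then polls the UNIX domain socket until RPCs are served. A rough standalone equivalent, where rpc_get_methods is used only as a cheap readiness probe:

    ./build/bin/spdk_tgt -e bdev &
    until ./scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done
    echo 'spdk_tgt ready on /var/tmp/spdk.sock'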
00:05:59.989 02:50:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:59.989 02:50:38 -- common/autotest_common.sh@10 -- # set +x 00:05:59.989 [2024-04-23 02:50:39.003079] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:05:59.989 [2024-04-23 02:50:39.003220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72138 ] 00:05:59.989 [2024-04-23 02:50:39.137685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:00.254 [2024-04-23 02:50:39.153994] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.254 [2024-04-23 02:50:39.195638] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:00.254 [2024-04-23 02:50:39.195696] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 72138' to capture a snapshot of events at runtime. 00:06:00.254 [2024-04-23 02:50:39.195711] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:00.254 [2024-04-23 02:50:39.195721] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:00.254 [2024-04-23 02:50:39.195730] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid72138 for offline analysis/debug. 00:06:00.254 [2024-04-23 02:50:39.195776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.828 02:50:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.828 02:50:39 -- common/autotest_common.sh@850 -- # return 0 00:06:00.828 02:50:39 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.828 02:50:39 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:00.828 02:50:39 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:00.828 02:50:39 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:00.828 02:50:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.828 02:50:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.828 02:50:39 -- common/autotest_common.sh@10 -- # set +x 00:06:01.098 ************************************ 00:06:01.098 START TEST rpc_integrity 00:06:01.098 ************************************ 00:06:01.098 02:50:40 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:01.098 02:50:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:01.098 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.098 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.098 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.098 02:50:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:01.098 02:50:40 -- rpc/rpc.sh@13 -- # jq length 00:06:01.098 02:50:40 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:01.098 02:50:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:01.098 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.098 02:50:40 -- common/autotest_common.sh@10 -- # set +x 
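Note (annotation, not captured output): rpc_cmd drives scripts/rpc.py under the hood, so the rpc_integrity create/inspect/delete cycle below can be replayed by hand against the same running target. A hedged sketch of the equivalent calls:

    ./scripts/rpc.py bdev_malloc_create 8 512      # 8 MB bdev, 512-byte blocks -> Malloc0
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    ./scripts/rpc.py bdev_get_bdevs | jq length    # expect 2 once Passthru0 claims Malloc0
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0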
00:06:01.098 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.098 02:50:40 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:01.098 02:50:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:01.098 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.098 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.098 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.098 02:50:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:01.098 { 00:06:01.098 "name": "Malloc0", 00:06:01.098 "aliases": [ 00:06:01.098 "0ee67a3a-7171-4f38-8ea1-cb97ab4dae36" 00:06:01.098 ], 00:06:01.098 "product_name": "Malloc disk", 00:06:01.098 "block_size": 512, 00:06:01.098 "num_blocks": 16384, 00:06:01.098 "uuid": "0ee67a3a-7171-4f38-8ea1-cb97ab4dae36", 00:06:01.098 "assigned_rate_limits": { 00:06:01.098 "rw_ios_per_sec": 0, 00:06:01.098 "rw_mbytes_per_sec": 0, 00:06:01.098 "r_mbytes_per_sec": 0, 00:06:01.098 "w_mbytes_per_sec": 0 00:06:01.098 }, 00:06:01.098 "claimed": false, 00:06:01.098 "zoned": false, 00:06:01.098 "supported_io_types": { 00:06:01.098 "read": true, 00:06:01.098 "write": true, 00:06:01.098 "unmap": true, 00:06:01.098 "write_zeroes": true, 00:06:01.098 "flush": true, 00:06:01.098 "reset": true, 00:06:01.098 "compare": false, 00:06:01.098 "compare_and_write": false, 00:06:01.098 "abort": true, 00:06:01.098 "nvme_admin": false, 00:06:01.098 "nvme_io": false 00:06:01.098 }, 00:06:01.098 "memory_domains": [ 00:06:01.098 { 00:06:01.098 "dma_device_id": "system", 00:06:01.098 "dma_device_type": 1 00:06:01.098 }, 00:06:01.098 { 00:06:01.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.098 "dma_device_type": 2 00:06:01.098 } 00:06:01.098 ], 00:06:01.098 "driver_specific": {} 00:06:01.098 } 00:06:01.098 ]' 00:06:01.098 02:50:40 -- rpc/rpc.sh@17 -- # jq length 00:06:01.098 02:50:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:01.098 02:50:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:01.098 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.098 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.098 [2024-04-23 02:50:40.203084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:01.098 [2024-04-23 02:50:40.203139] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.098 [2024-04-23 02:50:40.203158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1342d50 00:06:01.098 [2024-04-23 02:50:40.203167] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.098 [2024-04-23 02:50:40.204733] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.098 [2024-04-23 02:50:40.204793] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:01.098 Passthru0 00:06:01.098 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.098 02:50:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:01.098 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.098 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.098 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.098 02:50:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:01.098 { 00:06:01.098 "name": "Malloc0", 00:06:01.098 "aliases": [ 00:06:01.098 "0ee67a3a-7171-4f38-8ea1-cb97ab4dae36" 00:06:01.098 ], 00:06:01.098 "product_name": "Malloc disk", 00:06:01.098 "block_size": 512, 00:06:01.098 "num_blocks": 16384, 00:06:01.098 "uuid": 
"0ee67a3a-7171-4f38-8ea1-cb97ab4dae36", 00:06:01.098 "assigned_rate_limits": { 00:06:01.098 "rw_ios_per_sec": 0, 00:06:01.098 "rw_mbytes_per_sec": 0, 00:06:01.098 "r_mbytes_per_sec": 0, 00:06:01.098 "w_mbytes_per_sec": 0 00:06:01.098 }, 00:06:01.098 "claimed": true, 00:06:01.098 "claim_type": "exclusive_write", 00:06:01.098 "zoned": false, 00:06:01.098 "supported_io_types": { 00:06:01.098 "read": true, 00:06:01.098 "write": true, 00:06:01.098 "unmap": true, 00:06:01.098 "write_zeroes": true, 00:06:01.098 "flush": true, 00:06:01.098 "reset": true, 00:06:01.098 "compare": false, 00:06:01.098 "compare_and_write": false, 00:06:01.098 "abort": true, 00:06:01.098 "nvme_admin": false, 00:06:01.098 "nvme_io": false 00:06:01.098 }, 00:06:01.098 "memory_domains": [ 00:06:01.098 { 00:06:01.098 "dma_device_id": "system", 00:06:01.098 "dma_device_type": 1 00:06:01.098 }, 00:06:01.098 { 00:06:01.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.098 "dma_device_type": 2 00:06:01.098 } 00:06:01.098 ], 00:06:01.098 "driver_specific": {} 00:06:01.098 }, 00:06:01.098 { 00:06:01.098 "name": "Passthru0", 00:06:01.098 "aliases": [ 00:06:01.098 "c913931b-1da9-5c6a-8fee-a99a8bc59949" 00:06:01.098 ], 00:06:01.098 "product_name": "passthru", 00:06:01.098 "block_size": 512, 00:06:01.098 "num_blocks": 16384, 00:06:01.098 "uuid": "c913931b-1da9-5c6a-8fee-a99a8bc59949", 00:06:01.098 "assigned_rate_limits": { 00:06:01.098 "rw_ios_per_sec": 0, 00:06:01.098 "rw_mbytes_per_sec": 0, 00:06:01.098 "r_mbytes_per_sec": 0, 00:06:01.098 "w_mbytes_per_sec": 0 00:06:01.098 }, 00:06:01.098 "claimed": false, 00:06:01.098 "zoned": false, 00:06:01.098 "supported_io_types": { 00:06:01.098 "read": true, 00:06:01.098 "write": true, 00:06:01.098 "unmap": true, 00:06:01.098 "write_zeroes": true, 00:06:01.098 "flush": true, 00:06:01.098 "reset": true, 00:06:01.098 "compare": false, 00:06:01.098 "compare_and_write": false, 00:06:01.098 "abort": true, 00:06:01.098 "nvme_admin": false, 00:06:01.098 "nvme_io": false 00:06:01.098 }, 00:06:01.098 "memory_domains": [ 00:06:01.098 { 00:06:01.098 "dma_device_id": "system", 00:06:01.098 "dma_device_type": 1 00:06:01.098 }, 00:06:01.098 { 00:06:01.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.098 "dma_device_type": 2 00:06:01.098 } 00:06:01.098 ], 00:06:01.098 "driver_specific": { 00:06:01.098 "passthru": { 00:06:01.098 "name": "Passthru0", 00:06:01.098 "base_bdev_name": "Malloc0" 00:06:01.098 } 00:06:01.098 } 00:06:01.098 } 00:06:01.098 ]' 00:06:01.098 02:50:40 -- rpc/rpc.sh@21 -- # jq length 00:06:01.357 02:50:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:01.357 02:50:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:01.357 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.357 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.357 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.357 02:50:40 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:01.357 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.357 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.357 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.357 02:50:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:01.357 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.357 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.357 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.357 02:50:40 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:01.357 02:50:40 -- 
rpc/rpc.sh@26 -- # jq length 00:06:01.358 02:50:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:01.358 00:06:01.358 real 0m0.321s 00:06:01.358 user 0m0.214s 00:06:01.358 sys 0m0.039s 00:06:01.358 02:50:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.358 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.358 ************************************ 00:06:01.358 END TEST rpc_integrity 00:06:01.358 ************************************ 00:06:01.358 02:50:40 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:01.358 02:50:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.358 02:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.358 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.358 ************************************ 00:06:01.358 START TEST rpc_plugins 00:06:01.358 ************************************ 00:06:01.358 02:50:40 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:06:01.358 02:50:40 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:01.358 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.358 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.617 02:50:40 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:01.617 02:50:40 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:01.617 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.617 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.617 02:50:40 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:01.617 { 00:06:01.617 "name": "Malloc1", 00:06:01.617 "aliases": [ 00:06:01.617 "a5bcb409-bc7c-4e8f-8664-520d38738ae8" 00:06:01.617 ], 00:06:01.617 "product_name": "Malloc disk", 00:06:01.617 "block_size": 4096, 00:06:01.617 "num_blocks": 256, 00:06:01.617 "uuid": "a5bcb409-bc7c-4e8f-8664-520d38738ae8", 00:06:01.617 "assigned_rate_limits": { 00:06:01.617 "rw_ios_per_sec": 0, 00:06:01.617 "rw_mbytes_per_sec": 0, 00:06:01.617 "r_mbytes_per_sec": 0, 00:06:01.617 "w_mbytes_per_sec": 0 00:06:01.617 }, 00:06:01.617 "claimed": false, 00:06:01.617 "zoned": false, 00:06:01.617 "supported_io_types": { 00:06:01.617 "read": true, 00:06:01.617 "write": true, 00:06:01.617 "unmap": true, 00:06:01.617 "write_zeroes": true, 00:06:01.617 "flush": true, 00:06:01.617 "reset": true, 00:06:01.617 "compare": false, 00:06:01.617 "compare_and_write": false, 00:06:01.617 "abort": true, 00:06:01.617 "nvme_admin": false, 00:06:01.617 "nvme_io": false 00:06:01.617 }, 00:06:01.617 "memory_domains": [ 00:06:01.617 { 00:06:01.617 "dma_device_id": "system", 00:06:01.617 "dma_device_type": 1 00:06:01.617 }, 00:06:01.617 { 00:06:01.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:01.617 "dma_device_type": 2 00:06:01.617 } 00:06:01.617 ], 00:06:01.617 "driver_specific": {} 00:06:01.617 } 00:06:01.617 ]' 00:06:01.617 02:50:40 -- rpc/rpc.sh@32 -- # jq length 00:06:01.617 02:50:40 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:01.617 02:50:40 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:01.617 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.617 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.617 02:50:40 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:01.617 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.617 02:50:40 -- 
common/autotest_common.sh@10 -- # set +x 00:06:01.617 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.617 02:50:40 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:01.617 02:50:40 -- rpc/rpc.sh@36 -- # jq length 00:06:01.617 02:50:40 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:01.617 00:06:01.617 real 0m0.160s 00:06:01.617 user 0m0.104s 00:06:01.617 sys 0m0.018s 00:06:01.617 02:50:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:01.617 ************************************ 00:06:01.617 END TEST rpc_plugins 00:06:01.617 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.617 ************************************ 00:06:01.617 02:50:40 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:01.617 02:50:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.617 02:50:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.617 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.875 ************************************ 00:06:01.875 START TEST rpc_trace_cmd_test 00:06:01.875 ************************************ 00:06:01.875 02:50:40 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:06:01.875 02:50:40 -- rpc/rpc.sh@40 -- # local info 00:06:01.875 02:50:40 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:01.875 02:50:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:01.875 02:50:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.875 02:50:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:01.875 02:50:40 -- rpc/rpc.sh@42 -- # info='{ 00:06:01.875 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid72138", 00:06:01.875 "tpoint_group_mask": "0x8", 00:06:01.875 "iscsi_conn": { 00:06:01.875 "mask": "0x2", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "scsi": { 00:06:01.875 "mask": "0x4", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "bdev": { 00:06:01.875 "mask": "0x8", 00:06:01.875 "tpoint_mask": "0xffffffffffffffff" 00:06:01.875 }, 00:06:01.875 "nvmf_rdma": { 00:06:01.875 "mask": "0x10", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "nvmf_tcp": { 00:06:01.875 "mask": "0x20", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "ftl": { 00:06:01.875 "mask": "0x40", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "blobfs": { 00:06:01.875 "mask": "0x80", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "dsa": { 00:06:01.875 "mask": "0x200", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "thread": { 00:06:01.875 "mask": "0x400", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "nvme_pcie": { 00:06:01.875 "mask": "0x800", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "iaa": { 00:06:01.875 "mask": "0x1000", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "nvme_tcp": { 00:06:01.875 "mask": "0x2000", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "bdev_nvme": { 00:06:01.875 "mask": "0x4000", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 }, 00:06:01.875 "sock": { 00:06:01.875 "mask": "0x8000", 00:06:01.875 "tpoint_mask": "0x0" 00:06:01.875 } 00:06:01.875 }' 00:06:01.875 02:50:40 -- rpc/rpc.sh@43 -- # jq length 00:06:01.875 02:50:40 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:01.875 02:50:40 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:01.875 02:50:40 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:01.875 02:50:40 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:01.875 02:50:40 -- rpc/rpc.sh@45 -- # '[' true 
= true ']' 00:06:01.875 02:50:40 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:01.875 02:50:40 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:01.875 02:50:41 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:02.135 02:50:41 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:02.135 00:06:02.135 real 0m0.274s 00:06:02.135 user 0m0.239s 00:06:02.135 sys 0m0.027s 00:06:02.135 02:50:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.135 ************************************ 00:06:02.135 END TEST rpc_trace_cmd_test 00:06:02.135 ************************************ 00:06:02.135 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.135 02:50:41 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:02.135 02:50:41 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:02.135 02:50:41 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:02.135 02:50:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.135 02:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.135 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.135 ************************************ 00:06:02.135 START TEST rpc_daemon_integrity 00:06:02.135 ************************************ 00:06:02.135 02:50:41 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:06:02.135 02:50:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:02.135 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.135 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.135 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.135 02:50:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:02.135 02:50:41 -- rpc/rpc.sh@13 -- # jq length 00:06:02.135 02:50:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:02.135 02:50:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:02.135 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.135 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.135 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.135 02:50:41 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:02.135 02:50:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:02.135 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.135 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.135 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.135 02:50:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:02.135 { 00:06:02.135 "name": "Malloc2", 00:06:02.135 "aliases": [ 00:06:02.135 "7f11369b-482f-4320-ab47-7f0f4337b4cf" 00:06:02.135 ], 00:06:02.135 "product_name": "Malloc disk", 00:06:02.135 "block_size": 512, 00:06:02.135 "num_blocks": 16384, 00:06:02.135 "uuid": "7f11369b-482f-4320-ab47-7f0f4337b4cf", 00:06:02.135 "assigned_rate_limits": { 00:06:02.135 "rw_ios_per_sec": 0, 00:06:02.135 "rw_mbytes_per_sec": 0, 00:06:02.135 "r_mbytes_per_sec": 0, 00:06:02.135 "w_mbytes_per_sec": 0 00:06:02.135 }, 00:06:02.135 "claimed": false, 00:06:02.135 "zoned": false, 00:06:02.135 "supported_io_types": { 00:06:02.135 "read": true, 00:06:02.135 "write": true, 00:06:02.135 "unmap": true, 00:06:02.135 "write_zeroes": true, 00:06:02.135 "flush": true, 00:06:02.135 "reset": true, 00:06:02.135 "compare": false, 00:06:02.135 "compare_and_write": false, 00:06:02.135 "abort": true, 00:06:02.135 "nvme_admin": false, 00:06:02.135 "nvme_io": false 00:06:02.135 }, 00:06:02.135 "memory_domains": [ 00:06:02.135 { 00:06:02.135 "dma_device_id": "system", 00:06:02.135 "dma_device_type": 1 00:06:02.135 }, 00:06:02.135 { 
00:06:02.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.135 "dma_device_type": 2 00:06:02.135 } 00:06:02.135 ], 00:06:02.135 "driver_specific": {} 00:06:02.135 } 00:06:02.135 ]' 00:06:02.135 02:50:41 -- rpc/rpc.sh@17 -- # jq length 00:06:02.394 02:50:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:02.394 02:50:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:02.394 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.394 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.394 [2024-04-23 02:50:41.315680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:02.394 [2024-04-23 02:50:41.315740] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:02.394 [2024-04-23 02:50:41.315759] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x134e8f0 00:06:02.395 [2024-04-23 02:50:41.315769] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:02.395 [2024-04-23 02:50:41.317098] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:02.395 [2024-04-23 02:50:41.317141] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:02.395 Passthru0 00:06:02.395 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.395 02:50:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:02.395 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.395 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.395 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.395 02:50:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:02.395 { 00:06:02.395 "name": "Malloc2", 00:06:02.395 "aliases": [ 00:06:02.395 "7f11369b-482f-4320-ab47-7f0f4337b4cf" 00:06:02.395 ], 00:06:02.395 "product_name": "Malloc disk", 00:06:02.395 "block_size": 512, 00:06:02.395 "num_blocks": 16384, 00:06:02.395 "uuid": "7f11369b-482f-4320-ab47-7f0f4337b4cf", 00:06:02.395 "assigned_rate_limits": { 00:06:02.395 "rw_ios_per_sec": 0, 00:06:02.395 "rw_mbytes_per_sec": 0, 00:06:02.395 "r_mbytes_per_sec": 0, 00:06:02.395 "w_mbytes_per_sec": 0 00:06:02.395 }, 00:06:02.395 "claimed": true, 00:06:02.395 "claim_type": "exclusive_write", 00:06:02.395 "zoned": false, 00:06:02.395 "supported_io_types": { 00:06:02.395 "read": true, 00:06:02.395 "write": true, 00:06:02.395 "unmap": true, 00:06:02.395 "write_zeroes": true, 00:06:02.395 "flush": true, 00:06:02.395 "reset": true, 00:06:02.395 "compare": false, 00:06:02.395 "compare_and_write": false, 00:06:02.395 "abort": true, 00:06:02.395 "nvme_admin": false, 00:06:02.395 "nvme_io": false 00:06:02.395 }, 00:06:02.395 "memory_domains": [ 00:06:02.395 { 00:06:02.395 "dma_device_id": "system", 00:06:02.395 "dma_device_type": 1 00:06:02.395 }, 00:06:02.395 { 00:06:02.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.395 "dma_device_type": 2 00:06:02.395 } 00:06:02.395 ], 00:06:02.395 "driver_specific": {} 00:06:02.395 }, 00:06:02.395 { 00:06:02.395 "name": "Passthru0", 00:06:02.395 "aliases": [ 00:06:02.395 "7fc922e2-f097-555b-b76d-4465381ab3fc" 00:06:02.395 ], 00:06:02.395 "product_name": "passthru", 00:06:02.395 "block_size": 512, 00:06:02.395 "num_blocks": 16384, 00:06:02.395 "uuid": "7fc922e2-f097-555b-b76d-4465381ab3fc", 00:06:02.395 "assigned_rate_limits": { 00:06:02.395 "rw_ios_per_sec": 0, 00:06:02.395 "rw_mbytes_per_sec": 0, 00:06:02.395 "r_mbytes_per_sec": 0, 00:06:02.395 "w_mbytes_per_sec": 0 00:06:02.395 }, 00:06:02.395 "claimed": 
false, 00:06:02.395 "zoned": false, 00:06:02.395 "supported_io_types": { 00:06:02.395 "read": true, 00:06:02.395 "write": true, 00:06:02.395 "unmap": true, 00:06:02.395 "write_zeroes": true, 00:06:02.395 "flush": true, 00:06:02.395 "reset": true, 00:06:02.395 "compare": false, 00:06:02.395 "compare_and_write": false, 00:06:02.395 "abort": true, 00:06:02.395 "nvme_admin": false, 00:06:02.395 "nvme_io": false 00:06:02.395 }, 00:06:02.395 "memory_domains": [ 00:06:02.395 { 00:06:02.395 "dma_device_id": "system", 00:06:02.395 "dma_device_type": 1 00:06:02.395 }, 00:06:02.395 { 00:06:02.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:02.395 "dma_device_type": 2 00:06:02.395 } 00:06:02.395 ], 00:06:02.395 "driver_specific": { 00:06:02.395 "passthru": { 00:06:02.395 "name": "Passthru0", 00:06:02.395 "base_bdev_name": "Malloc2" 00:06:02.395 } 00:06:02.395 } 00:06:02.395 } 00:06:02.395 ]' 00:06:02.395 02:50:41 -- rpc/rpc.sh@21 -- # jq length 00:06:02.395 02:50:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:02.395 02:50:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:02.395 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.395 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.395 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.395 02:50:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:02.395 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.395 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.395 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.395 02:50:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:02.395 02:50:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.395 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.395 02:50:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.395 02:50:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:02.395 02:50:41 -- rpc/rpc.sh@26 -- # jq length 00:06:02.395 02:50:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:02.395 00:06:02.395 real 0m0.317s 00:06:02.395 user 0m0.213s 00:06:02.395 sys 0m0.033s 00:06:02.395 02:50:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.395 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.395 ************************************ 00:06:02.395 END TEST rpc_daemon_integrity 00:06:02.395 ************************************ 00:06:02.395 02:50:41 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:02.395 02:50:41 -- rpc/rpc.sh@84 -- # killprocess 72138 00:06:02.395 02:50:41 -- common/autotest_common.sh@936 -- # '[' -z 72138 ']' 00:06:02.395 02:50:41 -- common/autotest_common.sh@940 -- # kill -0 72138 00:06:02.395 02:50:41 -- common/autotest_common.sh@941 -- # uname 00:06:02.395 02:50:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.395 02:50:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72138 00:06:02.654 02:50:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.654 02:50:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.654 killing process with pid 72138 00:06:02.654 02:50:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72138' 00:06:02.654 02:50:41 -- common/autotest_common.sh@955 -- # kill 72138 00:06:02.654 02:50:41 -- common/autotest_common.sh@960 -- # wait 72138 00:06:02.654 00:06:02.654 real 0m2.935s 00:06:02.654 user 0m3.999s 00:06:02.654 sys 0m0.689s 00:06:02.654 02:50:41 -- common/autotest_common.sh@1112 -- 
# xtrace_disable 00:06:02.654 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.654 ************************************ 00:06:02.654 END TEST rpc 00:06:02.654 ************************************ 00:06:02.654 02:50:41 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:02.654 02:50:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.654 02:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.654 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.913 ************************************ 00:06:02.913 START TEST skip_rpc 00:06:02.913 ************************************ 00:06:02.913 02:50:41 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:02.913 * Looking for test storage... 00:06:02.913 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:02.913 02:50:41 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.913 02:50:41 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:02.913 02:50:41 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:02.913 02:50:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:02.913 02:50:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:02.913 02:50:41 -- common/autotest_common.sh@10 -- # set +x 00:06:02.913 ************************************ 00:06:02.913 START TEST skip_rpc 00:06:02.913 ************************************ 00:06:02.913 02:50:42 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:06:02.913 02:50:42 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=72363 00:06:02.913 02:50:42 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:02.913 02:50:42 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:02.913 02:50:42 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:03.172 [2024-04-23 02:50:42.097833] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:03.172 [2024-04-23 02:50:42.097918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72363 ] 00:06:03.172 [2024-04-23 02:50:42.218791] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
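For readers following the trace: test_skip_rpc above starts the target with the RPC server disabled and then (in the lines below) asserts that an RPC call fails. A condensed sketch of that flow, using only paths and flags visible in this log; the assertion wrapper is simplified relative to the suite's NOT helper:

    # start the target with the RPC server disabled, exactly as traced above
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    trap 'kill "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
    sleep 5
    # with no server on /var/tmp/spdk.sock, any RPC must fail
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo "FAIL: spdk_get_version succeeded despite --no-rpc-server" >&2
        exit 1
    fi
    trap - SIGINT SIGTERM EXIT
    kill "$spdk_pid" && wait "$spdk_pid"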
00:06:03.172 [2024-04-23 02:50:42.240280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.172 [2024-04-23 02:50:42.281526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.442 02:50:47 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:08.442 02:50:47 -- common/autotest_common.sh@638 -- # local es=0 00:06:08.442 02:50:47 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:08.442 02:50:47 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:08.442 02:50:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:08.442 02:50:47 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:08.442 02:50:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:08.442 02:50:47 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:06:08.442 02:50:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:08.442 02:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.442 02:50:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:08.442 02:50:47 -- common/autotest_common.sh@641 -- # es=1 00:06:08.442 02:50:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:08.442 02:50:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:08.442 02:50:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:08.442 02:50:47 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:08.442 02:50:47 -- rpc/skip_rpc.sh@23 -- # killprocess 72363 00:06:08.442 02:50:47 -- common/autotest_common.sh@936 -- # '[' -z 72363 ']' 00:06:08.442 02:50:47 -- common/autotest_common.sh@940 -- # kill -0 72363 00:06:08.442 02:50:47 -- common/autotest_common.sh@941 -- # uname 00:06:08.442 02:50:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:08.442 02:50:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72363 00:06:08.442 02:50:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:08.442 killing process with pid 72363 00:06:08.442 02:50:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:08.442 02:50:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72363' 00:06:08.442 02:50:47 -- common/autotest_common.sh@955 -- # kill 72363 00:06:08.442 02:50:47 -- common/autotest_common.sh@960 -- # wait 72363 00:06:08.442 00:06:08.442 real 0m5.270s 00:06:08.442 user 0m4.969s 00:06:08.442 sys 0m0.206s 00:06:08.442 02:50:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:08.442 ************************************ 00:06:08.442 END TEST skip_rpc 00:06:08.442 ************************************ 00:06:08.442 02:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.443 02:50:47 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:08.443 02:50:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.443 02:50:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.443 02:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.443 ************************************ 00:06:08.443 START TEST skip_rpc_with_json 00:06:08.443 ************************************ 00:06:08.443 02:50:47 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:06:08.443 02:50:47 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:08.443 02:50:47 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=72448 00:06:08.443 02:50:47 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:08.443 02:50:47 -- 
rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.443 02:50:47 -- rpc/skip_rpc.sh@31 -- # waitforlisten 72448 00:06:08.443 02:50:47 -- common/autotest_common.sh@817 -- # '[' -z 72448 ']' 00:06:08.443 02:50:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.443 02:50:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:08.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.443 02:50:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.443 02:50:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:08.443 02:50:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.443 [2024-04-23 02:50:47.491956] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:08.443 [2024-04-23 02:50:47.492047] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72448 ] 00:06:08.701 [2024-04-23 02:50:47.623249] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:08.701 [2024-04-23 02:50:47.637865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.701 [2024-04-23 02:50:47.674882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.637 02:50:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:09.637 02:50:48 -- common/autotest_common.sh@850 -- # return 0 00:06:09.637 02:50:48 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:09.637 02:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.637 02:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.637 [2024-04-23 02:50:48.511911] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:09.637 request: 00:06:09.637 { 00:06:09.637 "trtype": "tcp", 00:06:09.637 "method": "nvmf_get_transports", 00:06:09.637 "req_id": 1 00:06:09.637 } 00:06:09.637 Got JSON-RPC error response 00:06:09.637 response: 00:06:09.637 { 00:06:09.637 "code": -19, 00:06:09.637 "message": "No such device" 00:06:09.637 } 00:06:09.637 02:50:48 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:09.637 02:50:48 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:09.637 02:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.637 02:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.637 [2024-04-23 02:50:48.523935] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.637 02:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.637 02:50:48 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:09.637 02:50:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:09.637 02:50:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.637 02:50:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:09.637 02:50:48 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.637 { 00:06:09.637 "subsystems": [ 00:06:09.637 { 00:06:09.637 "subsystem": "keyring", 00:06:09.637 "config": [] 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "subsystem": "iobuf", 00:06:09.637 "config": [ 00:06:09.637 { 00:06:09.637 "method": "iobuf_set_options", 
00:06:09.637 "params": { 00:06:09.637 "small_pool_count": 8192, 00:06:09.637 "large_pool_count": 1024, 00:06:09.637 "small_bufsize": 8192, 00:06:09.637 "large_bufsize": 135168 00:06:09.637 } 00:06:09.637 } 00:06:09.637 ] 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "subsystem": "sock", 00:06:09.637 "config": [ 00:06:09.637 { 00:06:09.637 "method": "sock_impl_set_options", 00:06:09.637 "params": { 00:06:09.637 "impl_name": "uring", 00:06:09.637 "recv_buf_size": 2097152, 00:06:09.637 "send_buf_size": 2097152, 00:06:09.637 "enable_recv_pipe": true, 00:06:09.637 "enable_quickack": false, 00:06:09.637 "enable_placement_id": 0, 00:06:09.637 "enable_zerocopy_send_server": false, 00:06:09.637 "enable_zerocopy_send_client": false, 00:06:09.637 "zerocopy_threshold": 0, 00:06:09.637 "tls_version": 0, 00:06:09.637 "enable_ktls": false 00:06:09.637 } 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "method": "sock_impl_set_options", 00:06:09.637 "params": { 00:06:09.637 "impl_name": "posix", 00:06:09.637 "recv_buf_size": 2097152, 00:06:09.637 "send_buf_size": 2097152, 00:06:09.637 "enable_recv_pipe": true, 00:06:09.637 "enable_quickack": false, 00:06:09.637 "enable_placement_id": 0, 00:06:09.637 "enable_zerocopy_send_server": true, 00:06:09.637 "enable_zerocopy_send_client": false, 00:06:09.637 "zerocopy_threshold": 0, 00:06:09.637 "tls_version": 0, 00:06:09.637 "enable_ktls": false 00:06:09.637 } 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "method": "sock_impl_set_options", 00:06:09.637 "params": { 00:06:09.637 "impl_name": "ssl", 00:06:09.637 "recv_buf_size": 4096, 00:06:09.637 "send_buf_size": 4096, 00:06:09.637 "enable_recv_pipe": true, 00:06:09.637 "enable_quickack": false, 00:06:09.637 "enable_placement_id": 0, 00:06:09.637 "enable_zerocopy_send_server": true, 00:06:09.637 "enable_zerocopy_send_client": false, 00:06:09.637 "zerocopy_threshold": 0, 00:06:09.637 "tls_version": 0, 00:06:09.637 "enable_ktls": false 00:06:09.637 } 00:06:09.637 } 00:06:09.637 ] 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "subsystem": "vmd", 00:06:09.637 "config": [] 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "subsystem": "accel", 00:06:09.637 "config": [ 00:06:09.637 { 00:06:09.637 "method": "accel_set_options", 00:06:09.637 "params": { 00:06:09.637 "small_cache_size": 128, 00:06:09.637 "large_cache_size": 16, 00:06:09.637 "task_count": 2048, 00:06:09.637 "sequence_count": 2048, 00:06:09.637 "buf_count": 2048 00:06:09.637 } 00:06:09.637 } 00:06:09.637 ] 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "subsystem": "bdev", 00:06:09.637 "config": [ 00:06:09.637 { 00:06:09.637 "method": "bdev_set_options", 00:06:09.637 "params": { 00:06:09.637 "bdev_io_pool_size": 65535, 00:06:09.637 "bdev_io_cache_size": 256, 00:06:09.637 "bdev_auto_examine": true, 00:06:09.637 "iobuf_small_cache_size": 128, 00:06:09.637 "iobuf_large_cache_size": 16 00:06:09.637 } 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "method": "bdev_raid_set_options", 00:06:09.637 "params": { 00:06:09.637 "process_window_size_kb": 1024 00:06:09.637 } 00:06:09.637 }, 00:06:09.637 { 00:06:09.637 "method": "bdev_iscsi_set_options", 00:06:09.637 "params": { 00:06:09.638 "timeout_sec": 30 00:06:09.638 } 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "method": "bdev_nvme_set_options", 00:06:09.638 "params": { 00:06:09.638 "action_on_timeout": "none", 00:06:09.638 "timeout_us": 0, 00:06:09.638 "timeout_admin_us": 0, 00:06:09.638 "keep_alive_timeout_ms": 10000, 00:06:09.638 "arbitration_burst": 0, 00:06:09.638 "low_priority_weight": 0, 00:06:09.638 "medium_priority_weight": 0, 
00:06:09.638 "high_priority_weight": 0, 00:06:09.638 "nvme_adminq_poll_period_us": 10000, 00:06:09.638 "nvme_ioq_poll_period_us": 0, 00:06:09.638 "io_queue_requests": 0, 00:06:09.638 "delay_cmd_submit": true, 00:06:09.638 "transport_retry_count": 4, 00:06:09.638 "bdev_retry_count": 3, 00:06:09.638 "transport_ack_timeout": 0, 00:06:09.638 "ctrlr_loss_timeout_sec": 0, 00:06:09.638 "reconnect_delay_sec": 0, 00:06:09.638 "fast_io_fail_timeout_sec": 0, 00:06:09.638 "disable_auto_failback": false, 00:06:09.638 "generate_uuids": false, 00:06:09.638 "transport_tos": 0, 00:06:09.638 "nvme_error_stat": false, 00:06:09.638 "rdma_srq_size": 0, 00:06:09.638 "io_path_stat": false, 00:06:09.638 "allow_accel_sequence": false, 00:06:09.638 "rdma_max_cq_size": 0, 00:06:09.638 "rdma_cm_event_timeout_ms": 0, 00:06:09.638 "dhchap_digests": [ 00:06:09.638 "sha256", 00:06:09.638 "sha384", 00:06:09.638 "sha512" 00:06:09.638 ], 00:06:09.638 "dhchap_dhgroups": [ 00:06:09.638 "null", 00:06:09.638 "ffdhe2048", 00:06:09.638 "ffdhe3072", 00:06:09.638 "ffdhe4096", 00:06:09.638 "ffdhe6144", 00:06:09.638 "ffdhe8192" 00:06:09.638 ] 00:06:09.638 } 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "method": "bdev_nvme_set_hotplug", 00:06:09.638 "params": { 00:06:09.638 "period_us": 100000, 00:06:09.638 "enable": false 00:06:09.638 } 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "method": "bdev_wait_for_examine" 00:06:09.638 } 00:06:09.638 ] 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "scsi", 00:06:09.638 "config": null 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "scheduler", 00:06:09.638 "config": [ 00:06:09.638 { 00:06:09.638 "method": "framework_set_scheduler", 00:06:09.638 "params": { 00:06:09.638 "name": "static" 00:06:09.638 } 00:06:09.638 } 00:06:09.638 ] 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "vhost_scsi", 00:06:09.638 "config": [] 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "vhost_blk", 00:06:09.638 "config": [] 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "ublk", 00:06:09.638 "config": [] 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "nbd", 00:06:09.638 "config": [] 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "nvmf", 00:06:09.638 "config": [ 00:06:09.638 { 00:06:09.638 "method": "nvmf_set_config", 00:06:09.638 "params": { 00:06:09.638 "discovery_filter": "match_any", 00:06:09.638 "admin_cmd_passthru": { 00:06:09.638 "identify_ctrlr": false 00:06:09.638 } 00:06:09.638 } 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "method": "nvmf_set_max_subsystems", 00:06:09.638 "params": { 00:06:09.638 "max_subsystems": 1024 00:06:09.638 } 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "method": "nvmf_set_crdt", 00:06:09.638 "params": { 00:06:09.638 "crdt1": 0, 00:06:09.638 "crdt2": 0, 00:06:09.638 "crdt3": 0 00:06:09.638 } 00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "method": "nvmf_create_transport", 00:06:09.638 "params": { 00:06:09.638 "trtype": "TCP", 00:06:09.638 "max_queue_depth": 128, 00:06:09.638 "max_io_qpairs_per_ctrlr": 127, 00:06:09.638 "in_capsule_data_size": 4096, 00:06:09.638 "max_io_size": 131072, 00:06:09.638 "io_unit_size": 131072, 00:06:09.638 "max_aq_depth": 128, 00:06:09.638 "num_shared_buffers": 511, 00:06:09.638 "buf_cache_size": 4294967295, 00:06:09.638 "dif_insert_or_strip": false, 00:06:09.638 "zcopy": false, 00:06:09.638 "c2h_success": true, 00:06:09.638 "sock_priority": 0, 00:06:09.638 "abort_timeout_sec": 1, 00:06:09.638 "ack_timeout": 0, 00:06:09.638 "data_wr_pool_size": 0 00:06:09.638 } 00:06:09.638 } 00:06:09.638 ] 
00:06:09.638 }, 00:06:09.638 { 00:06:09.638 "subsystem": "iscsi", 00:06:09.638 "config": [ 00:06:09.638 { 00:06:09.638 "method": "iscsi_set_options", 00:06:09.638 "params": { 00:06:09.638 "node_base": "iqn.2016-06.io.spdk", 00:06:09.638 "max_sessions": 128, 00:06:09.638 "max_connections_per_session": 2, 00:06:09.638 "max_queue_depth": 64, 00:06:09.638 "default_time2wait": 2, 00:06:09.638 "default_time2retain": 20, 00:06:09.638 "first_burst_length": 8192, 00:06:09.638 "immediate_data": true, 00:06:09.638 "allow_duplicated_isid": false, 00:06:09.638 "error_recovery_level": 0, 00:06:09.638 "nop_timeout": 60, 00:06:09.638 "nop_in_interval": 30, 00:06:09.638 "disable_chap": false, 00:06:09.638 "require_chap": false, 00:06:09.638 "mutual_chap": false, 00:06:09.638 "chap_group": 0, 00:06:09.638 "max_large_datain_per_connection": 64, 00:06:09.638 "max_r2t_per_connection": 4, 00:06:09.638 "pdu_pool_size": 36864, 00:06:09.638 "immediate_data_pool_size": 16384, 00:06:09.638 "data_out_pool_size": 2048 00:06:09.638 } 00:06:09.638 } 00:06:09.638 ] 00:06:09.638 } 00:06:09.638 ] 00:06:09.638 } 00:06:09.638 02:50:48 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:09.638 02:50:48 -- rpc/skip_rpc.sh@40 -- # killprocess 72448 00:06:09.638 02:50:48 -- common/autotest_common.sh@936 -- # '[' -z 72448 ']' 00:06:09.638 02:50:48 -- common/autotest_common.sh@940 -- # kill -0 72448 00:06:09.638 02:50:48 -- common/autotest_common.sh@941 -- # uname 00:06:09.638 02:50:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.638 02:50:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72448 00:06:09.638 02:50:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.638 02:50:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.638 killing process with pid 72448 00:06:09.638 02:50:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72448' 00:06:09.638 02:50:48 -- common/autotest_common.sh@955 -- # kill 72448 00:06:09.638 02:50:48 -- common/autotest_common.sh@960 -- # wait 72448 00:06:09.897 02:50:48 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:09.898 02:50:48 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=72475 00:06:09.898 02:50:48 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:15.169 02:50:53 -- rpc/skip_rpc.sh@50 -- # killprocess 72475 00:06:15.169 02:50:53 -- common/autotest_common.sh@936 -- # '[' -z 72475 ']' 00:06:15.169 02:50:53 -- common/autotest_common.sh@940 -- # kill -0 72475 00:06:15.169 02:50:53 -- common/autotest_common.sh@941 -- # uname 00:06:15.169 02:50:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:15.169 02:50:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72475 00:06:15.169 02:50:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:15.169 02:50:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:15.169 killing process with pid 72475 00:06:15.169 02:50:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72475' 00:06:15.169 02:50:53 -- common/autotest_common.sh@955 -- # kill 72475 00:06:15.169 02:50:53 -- common/autotest_common.sh@960 -- # wait 72475 00:06:15.169 02:50:54 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:15.169 02:50:54 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:15.169 00:06:15.169 real 0m6.768s 
00:06:15.169 user 0m6.751s 00:06:15.169 sys 0m0.440s 00:06:15.169 02:50:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.169 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.169 ************************************ 00:06:15.169 END TEST skip_rpc_with_json 00:06:15.169 ************************************ 00:06:15.169 02:50:54 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:15.169 02:50:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.169 02:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.169 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.169 ************************************ 00:06:15.169 START TEST skip_rpc_with_delay 00:06:15.169 ************************************ 00:06:15.169 02:50:54 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:06:15.169 02:50:54 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.169 02:50:54 -- common/autotest_common.sh@638 -- # local es=0 00:06:15.169 02:50:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.169 02:50:54 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.169 02:50:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:15.169 02:50:54 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.169 02:50:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:15.169 02:50:54 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.170 02:50:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:15.170 02:50:54 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.170 02:50:54 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:15.170 02:50:54 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:15.429 [2024-04-23 02:50:54.370451] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
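skip_rpc_with_delay, which starts in the lines below, reduces to a single negative check: spdk_tgt must refuse to combine --no-rpc-server with --wait-for-rpc. A minimal sketch of that assertion, with the expected error copied from the trace; the suite routes this through its NOT/valid_exec_arg helpers rather than a bare if:

    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: startup should have been rejected" >&2
        exit 1
    fi
    # expected on stderr, as logged in this trace:
    #   app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.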
00:06:15.429 [2024-04-23 02:50:54.370582] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:15.429 02:50:54 -- common/autotest_common.sh@641 -- # es=1 00:06:15.429 02:50:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:15.429 02:50:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:15.429 02:50:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:15.429 00:06:15.429 real 0m0.084s 00:06:15.429 user 0m0.056s 00:06:15.429 sys 0m0.026s 00:06:15.429 02:50:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:15.429 ************************************ 00:06:15.429 END TEST skip_rpc_with_delay 00:06:15.429 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 ************************************ 00:06:15.429 02:50:54 -- rpc/skip_rpc.sh@77 -- # uname 00:06:15.429 02:50:54 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:15.429 02:50:54 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:15.429 02:50:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:15.429 02:50:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:15.429 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 ************************************ 00:06:15.429 START TEST exit_on_failed_rpc_init 00:06:15.429 ************************************ 00:06:15.429 02:50:54 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:06:15.429 02:50:54 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=72593 00:06:15.429 02:50:54 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.429 02:50:54 -- rpc/skip_rpc.sh@63 -- # waitforlisten 72593 00:06:15.429 02:50:54 -- common/autotest_common.sh@817 -- # '[' -z 72593 ']' 00:06:15.429 02:50:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.429 02:50:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:15.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.429 02:50:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.429 02:50:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:15.429 02:50:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.429 [2024-04-23 02:50:54.568603] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:15.429 [2024-04-23 02:50:54.568699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72593 ] 00:06:15.688 [2024-04-23 02:50:54.689783] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
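waitforlisten 72593, traced above, is the helper behind the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message. A rough reconstruction, assuming it polls the RPC socket until the target answers; local max_retries=100 matches the xtrace, but the real implementation in common/autotest_common.sh may differ in its retry bookkeeping:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died before listening
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods >/dev/null 2>&1; then
                return 0                              # socket is up and answering
            fi
            sleep 0.1
        done
        return 1
    }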
00:06:15.688 [2024-04-23 02:50:54.707753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.688 [2024-04-23 02:50:54.742086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.947 02:50:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:15.947 02:50:54 -- common/autotest_common.sh@850 -- # return 0 00:06:15.947 02:50:54 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:15.947 02:50:54 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.947 02:50:54 -- common/autotest_common.sh@638 -- # local es=0 00:06:15.947 02:50:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.947 02:50:54 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.947 02:50:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:15.947 02:50:54 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.947 02:50:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:15.947 02:50:54 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.947 02:50:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:15.947 02:50:54 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:15.947 02:50:54 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:15.947 02:50:54 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:15.947 [2024-04-23 02:50:54.957347] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:15.947 [2024-04-23 02:50:54.957437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72598 ] 00:06:15.947 [2024-04-23 02:50:55.078170] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:15.947 [2024-04-23 02:50:55.100019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.206 [2024-04-23 02:50:55.141661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.206 [2024-04-23 02:50:55.141764] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
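The exit_on_failed_rpc_init flow around this point condenses to: one target claims the default RPC socket, and a second target started with a different core mask must then fail initialization. A sketch with the core masks and error text taken from this trace (waitforlisten as sketched earlier):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # claims /var/tmp/spdk.sock
    first=$!
    waitforlisten "$first"
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
        echo "FAIL: second target initialized on an in-use socket" >&2
        exit 1
    fi
    # the second instance exits after logging, as shown just above:
    #   rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
    kill "$first" && wait "$first"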
00:06:16.206 [2024-04-23 02:50:55.141782] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:16.206 [2024-04-23 02:50:55.141791] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:16.206 02:50:55 -- common/autotest_common.sh@641 -- # es=234 00:06:16.206 02:50:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:16.206 02:50:55 -- common/autotest_common.sh@650 -- # es=106 00:06:16.206 02:50:55 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:16.206 02:50:55 -- common/autotest_common.sh@658 -- # es=1 00:06:16.206 02:50:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:16.206 02:50:55 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:16.206 02:50:55 -- rpc/skip_rpc.sh@70 -- # killprocess 72593 00:06:16.206 02:50:55 -- common/autotest_common.sh@936 -- # '[' -z 72593 ']' 00:06:16.206 02:50:55 -- common/autotest_common.sh@940 -- # kill -0 72593 00:06:16.206 02:50:55 -- common/autotest_common.sh@941 -- # uname 00:06:16.206 02:50:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:16.206 02:50:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72593 00:06:16.206 killing process with pid 72593 00:06:16.206 02:50:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:16.206 02:50:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:16.206 02:50:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72593' 00:06:16.206 02:50:55 -- common/autotest_common.sh@955 -- # kill 72593 00:06:16.206 02:50:55 -- common/autotest_common.sh@960 -- # wait 72593 00:06:16.464 00:06:16.464 real 0m0.951s 00:06:16.464 user 0m1.077s 00:06:16.464 sys 0m0.271s 00:06:16.464 02:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.464 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.464 ************************************ 00:06:16.464 END TEST exit_on_failed_rpc_init 00:06:16.464 ************************************ 00:06:16.464 02:50:55 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.464 ************************************ 00:06:16.464 END TEST skip_rpc 00:06:16.464 ************************************ 00:06:16.464 00:06:16.464 real 0m13.627s 00:06:16.464 user 0m13.071s 00:06:16.464 sys 0m1.209s 00:06:16.464 02:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.464 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.464 02:50:55 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.464 02:50:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.464 02:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.464 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.723 ************************************ 00:06:16.723 START TEST rpc_client 00:06:16.723 ************************************ 00:06:16.723 02:50:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.723 * Looking for test storage... 
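killprocess, replayed above for pid 72593 and earlier for 72138 and 72363, is the common teardown helper. A hedged reconstruction from its xtrace lines (@936-@960): verify the pid, on Linux inspect the command name so a sudo wrapper is never signalled directly, then kill and reap. The sudo branch is elided because this run never takes it (process_name=reactor_0 throughout):

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                         # must still be running
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1    # real helper signals the wrapped child here; elided
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }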
00:06:16.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:16.723 02:50:55 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:16.723 OK 00:06:16.723 02:50:55 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.723 00:06:16.723 real 0m0.105s 00:06:16.723 user 0m0.047s 00:06:16.723 sys 0m0.063s 00:06:16.723 02:50:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:16.723 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.723 ************************************ 00:06:16.723 END TEST rpc_client 00:06:16.723 ************************************ 00:06:16.723 02:50:55 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.723 02:50:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:16.723 02:50:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:16.723 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.723 ************************************ 00:06:16.723 START TEST json_config 00:06:16.723 ************************************ 00:06:16.723 02:50:55 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.981 02:50:55 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:16.981 02:50:55 -- nvmf/common.sh@7 -- # uname -s 00:06:16.981 02:50:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:16.981 02:50:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:16.981 02:50:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:16.981 02:50:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:16.981 02:50:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:16.981 02:50:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:16.981 02:50:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:16.981 02:50:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:16.981 02:50:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:16.981 02:50:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:16.981 02:50:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:06:16.981 02:50:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:06:16.981 02:50:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:16.981 02:50:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:16.981 02:50:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:16.981 02:50:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:16.981 02:50:55 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.981 02:50:55 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.981 02:50:55 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.981 02:50:55 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.981 02:50:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.981 02:50:55 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.981 02:50:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.981 02:50:55 -- paths/export.sh@5 -- # export PATH 00:06:16.981 02:50:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.981 02:50:55 -- nvmf/common.sh@47 -- # : 0 00:06:16.981 02:50:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:16.981 02:50:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:16.981 02:50:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:16.981 02:50:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:16.981 02:50:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:16.981 02:50:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:16.981 02:50:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:16.982 02:50:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:16.982 02:50:55 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:16.982 02:50:55 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:16.982 02:50:55 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:16.982 02:50:55 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:16.982 02:50:55 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:16.982 02:50:55 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:16.982 02:50:55 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:16.982 02:50:55 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:16.982 02:50:55 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:16.982 02:50:55 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:16.982 02:50:55 -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:16.982 02:50:55 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:16.982 02:50:55 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:16.982 INFO: JSON configuration test init 00:06:16.982 02:50:55 -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:06:16.982 02:50:55 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:16.982 02:50:55 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:16.982 02:50:55 -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:16.982 02:50:55 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:16.982 02:50:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.982 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.982 02:50:55 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:16.982 02:50:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:16.982 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.982 02:50:55 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:16.982 02:50:55 -- json_config/common.sh@9 -- # local app=target 00:06:16.982 02:50:55 -- json_config/common.sh@10 -- # shift 00:06:16.982 02:50:55 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:16.982 02:50:55 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:16.982 02:50:55 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:16.982 02:50:55 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.982 02:50:55 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:16.982 02:50:55 -- json_config/common.sh@22 -- # app_pid["$app"]=72727 00:06:16.982 02:50:55 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:16.982 Waiting for target to run... 00:06:16.982 02:50:55 -- json_config/common.sh@25 -- # waitforlisten 72727 /var/tmp/spdk_tgt.sock 00:06:16.982 02:50:55 -- common/autotest_common.sh@817 -- # '[' -z 72727 ']' 00:06:16.982 02:50:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:16.982 02:50:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:16.982 02:50:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:16.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:16.982 02:50:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:16.982 02:50:55 -- common/autotest_common.sh@10 -- # set +x 00:06:16.982 02:50:55 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:16.982 [2024-04-23 02:50:56.007801] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:16.982 [2024-04-23 02:50:56.008367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72727 ] 00:06:17.240 [2024-04-23 02:50:56.298968] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
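Every target RPC in this json_config run goes through the tgt_rpc wrapper, which pins rpc.py to the dedicated socket the target was started with (-r /var/tmp/spdk_tgt.sock). The wrapper body below is what the json_config/common.sh@57 trace lines expand to, and the sample calls are taken from the trace that follows:

    tgt_rpc() {
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"
    }
    # sample invocations from this run:
    tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
    tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
    tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420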
00:06:17.240 [2024-04-23 02:50:56.317967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.240 [2024-04-23 02:50:56.339350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.175 02:50:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.175 02:50:56 -- common/autotest_common.sh@850 -- # return 0 00:06:18.175 02:50:56 -- json_config/common.sh@26 -- # echo '' 00:06:18.175 00:06:18.175 02:50:56 -- json_config/json_config.sh@269 -- # create_accel_config 00:06:18.175 02:50:56 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:18.175 02:50:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.175 02:50:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.175 02:50:56 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:18.175 02:50:56 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:18.175 02:50:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:18.175 02:50:56 -- common/autotest_common.sh@10 -- # set +x 00:06:18.175 02:50:57 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:18.175 02:50:57 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:18.175 02:50:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:18.435 02:50:57 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:18.435 02:50:57 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:18.435 02:50:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.435 02:50:57 -- common/autotest_common.sh@10 -- # set +x 00:06:18.435 02:50:57 -- json_config/json_config.sh@45 -- # local ret=0 00:06:18.435 02:50:57 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:18.435 02:50:57 -- json_config/json_config.sh@46 -- # local enabled_types 00:06:18.435 02:50:57 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:18.435 02:50:57 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:18.435 02:50:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:18.694 02:50:57 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:18.694 02:50:57 -- json_config/json_config.sh@48 -- # local get_types 00:06:18.694 02:50:57 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:18.694 02:50:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:18.694 02:50:57 -- common/autotest_common.sh@10 -- # set +x 00:06:18.694 02:50:57 -- json_config/json_config.sh@55 -- # return 0 00:06:18.694 02:50:57 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:18.694 02:50:57 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:18.694 02:50:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:18.694 02:50:57 -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.694 02:50:57 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:18.694 02:50:57 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:18.694 02:50:57 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.694 02:50:57 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:18.952 MallocForNvmf0 00:06:18.952 02:50:58 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:18.952 02:50:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:19.212 MallocForNvmf1 00:06:19.212 02:50:58 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:19.212 02:50:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:19.471 [2024-04-23 02:50:58.529205] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:19.471 02:50:58 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.471 02:50:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:19.729 02:50:58 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:19.729 02:50:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:19.988 02:50:58 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:19.988 02:50:58 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:20.247 02:50:59 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.247 02:50:59 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:20.506 [2024-04-23 02:50:59.417742] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:20.506 02:50:59 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:20.506 02:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:20.506 02:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.506 02:50:59 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:20.506 02:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:20.506 02:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.506 02:50:59 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:20.506 02:50:59 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:20.506 02:50:59 -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:20.765 MallocBdevForConfigChangeCheck 00:06:20.765 02:50:59 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:20.765 02:50:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:20.765 02:50:59 -- common/autotest_common.sh@10 -- # set +x 00:06:20.765 02:50:59 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:20.765 02:50:59 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.024 INFO: shutting down applications... 00:06:21.024 02:51:00 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:21.024 02:51:00 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:21.024 02:51:00 -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:21.024 02:51:00 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:21.024 02:51:00 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:21.592 Calling clear_iscsi_subsystem 00:06:21.592 Calling clear_nvmf_subsystem 00:06:21.592 Calling clear_nbd_subsystem 00:06:21.592 Calling clear_ublk_subsystem 00:06:21.592 Calling clear_vhost_blk_subsystem 00:06:21.592 Calling clear_vhost_scsi_subsystem 00:06:21.592 Calling clear_bdev_subsystem 00:06:21.592 02:51:00 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:21.592 02:51:00 -- json_config/json_config.sh@343 -- # count=100 00:06:21.592 02:51:00 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:21.592 02:51:00 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:21.592 02:51:00 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:21.592 02:51:00 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:21.851 02:51:00 -- json_config/json_config.sh@345 -- # break 00:06:21.851 02:51:00 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:21.851 02:51:00 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:21.851 02:51:00 -- json_config/common.sh@31 -- # local app=target 00:06:21.851 02:51:00 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:21.851 02:51:00 -- json_config/common.sh@35 -- # [[ -n 72727 ]] 00:06:21.851 02:51:00 -- json_config/common.sh@38 -- # kill -SIGINT 72727 00:06:21.851 02:51:00 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:21.851 02:51:00 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.851 02:51:00 -- json_config/common.sh@41 -- # kill -0 72727 00:06:21.851 02:51:00 -- json_config/common.sh@45 -- # sleep 0.5 00:06:22.422 02:51:01 -- json_config/common.sh@40 -- # (( i++ )) 00:06:22.423 02:51:01 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:22.423 02:51:01 -- json_config/common.sh@41 -- # kill -0 72727 00:06:22.423 SPDK target shutdown done 00:06:22.423 INFO: relaunching applications... 
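Everything the json_config suite did above goes through scripts/rpc.py against the target's UNIX-domain socket. The create_nvmf_subsystem_config phase condenses to the following plain-shell sketch (a replay of the commands visible in the trace, assuming a target already serving /var/tmp/spdk_tgt.sock and a working directory of the SPDK repo root):

    # Two malloc bdevs to back the namespaces (size in MiB, block size in bytes)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
    # TCP transport, then a subsystem carrying both namespaces and a listener on 127.0.0.1:4420
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The two *NOTICE* lines from tcp.c in the log are the target acknowledging the transport init and the 127.0.0.1:4420 listener.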
00:06:22.423 02:51:01 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:22.423 02:51:01 -- json_config/common.sh@43 -- # break 00:06:22.423 02:51:01 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:22.423 02:51:01 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:22.423 02:51:01 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:22.423 02:51:01 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.423 02:51:01 -- json_config/common.sh@9 -- # local app=target 00:06:22.423 02:51:01 -- json_config/common.sh@10 -- # shift 00:06:22.423 02:51:01 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:22.423 02:51:01 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:22.423 02:51:01 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:22.423 02:51:01 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.423 02:51:01 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:22.423 Waiting for target to run... 00:06:22.423 02:51:01 -- json_config/common.sh@22 -- # app_pid["$app"]=72923 00:06:22.423 02:51:01 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:22.423 02:51:01 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:22.423 02:51:01 -- json_config/common.sh@25 -- # waitforlisten 72923 /var/tmp/spdk_tgt.sock 00:06:22.423 02:51:01 -- common/autotest_common.sh@817 -- # '[' -z 72923 ']' 00:06:22.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:22.423 02:51:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:22.423 02:51:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:22.423 02:51:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:22.423 02:51:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:22.423 02:51:01 -- common/autotest_common.sh@10 -- # set +x 00:06:22.423 [2024-04-23 02:51:01.512921] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:22.423 [2024-04-23 02:51:01.513020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72923 ] 00:06:22.682 [2024-04-23 02:51:01.816242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:22.682 [2024-04-23 02:51:01.837478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.941 [2024-04-23 02:51:01.866749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.200 [2024-04-23 02:51:02.162995] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:23.200 [2024-04-23 02:51:02.195039] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:23.459 00:06:23.459 INFO: Checking if target configuration is the same... 
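The relaunch above works because the earlier tgt_rpc save_config serialized the live target state into spdk_tgt_config.json, and spdk_tgt can replay such a file at startup via --json. A minimal sketch of that round trip, using the same flags as the command line in the log:

    # Dump the running target's configuration to a file...
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    # ...then a fresh target reconstructs the same state from it at startup
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json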
00:06:23.459 02:51:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:23.459 02:51:02 -- common/autotest_common.sh@850 -- # return 0 00:06:23.459 02:51:02 -- json_config/common.sh@26 -- # echo '' 00:06:23.459 02:51:02 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:23.459 02:51:02 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:23.459 02:51:02 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:23.459 02:51:02 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.459 02:51:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.459 + '[' 2 -ne 2 ']' 00:06:23.459 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:23.459 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:23.459 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:23.459 +++ basename /dev/fd/62 00:06:23.459 ++ mktemp /tmp/62.XXX 00:06:23.459 + tmp_file_1=/tmp/62.ybA 00:06:23.459 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.459 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:23.459 + tmp_file_2=/tmp/spdk_tgt_config.json.4Mx 00:06:23.459 + ret=0 00:06:23.459 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:23.718 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:23.718 + diff -u /tmp/62.ybA /tmp/spdk_tgt_config.json.4Mx 00:06:23.718 INFO: JSON config files are the same 00:06:23.718 + echo 'INFO: JSON config files are the same' 00:06:23.718 + rm /tmp/62.ybA /tmp/spdk_tgt_config.json.4Mx 00:06:23.718 + exit 0 00:06:23.977 INFO: changing configuration and checking if this can be detected... 00:06:23.977 02:51:02 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:23.977 02:51:02 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:23.977 02:51:02 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.977 02:51:02 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:23.977 02:51:03 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:23.977 02:51:03 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:23.977 02:51:03 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:23.977 + '[' 2 -ne 2 ']' 00:06:23.977 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:23.977 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:06:23.977 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:24.235 +++ basename /dev/fd/62 00:06:24.235 ++ mktemp /tmp/62.XXX 00:06:24.235 + tmp_file_1=/tmp/62.4ED 00:06:24.235 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:24.235 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:24.235 + tmp_file_2=/tmp/spdk_tgt_config.json.2xS 00:06:24.235 + ret=0 00:06:24.235 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:24.493 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:24.493 + diff -u /tmp/62.4ED /tmp/spdk_tgt_config.json.2xS 00:06:24.493 + ret=1 00:06:24.493 + echo '=== Start of file: /tmp/62.4ED ===' 00:06:24.493 + cat /tmp/62.4ED 00:06:24.493 + echo '=== End of file: /tmp/62.4ED ===' 00:06:24.493 + echo '' 00:06:24.493 + echo '=== Start of file: /tmp/spdk_tgt_config.json.2xS ===' 00:06:24.493 + cat /tmp/spdk_tgt_config.json.2xS 00:06:24.493 + echo '=== End of file: /tmp/spdk_tgt_config.json.2xS ===' 00:06:24.493 + echo '' 00:06:24.493 + rm /tmp/62.4ED /tmp/spdk_tgt_config.json.2xS 00:06:24.493 + exit 1 00:06:24.493 INFO: configuration change detected. 00:06:24.493 02:51:03 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:24.493 02:51:03 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:24.493 02:51:03 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:24.493 02:51:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:24.493 02:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.493 02:51:03 -- json_config/json_config.sh@307 -- # local ret=0 00:06:24.493 02:51:03 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:24.493 02:51:03 -- json_config/json_config.sh@317 -- # [[ -n 72923 ]] 00:06:24.493 02:51:03 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:24.493 02:51:03 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:24.493 02:51:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:24.493 02:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.493 02:51:03 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:24.493 02:51:03 -- json_config/json_config.sh@193 -- # uname -s 00:06:24.493 02:51:03 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:24.493 02:51:03 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:24.493 02:51:03 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:24.493 02:51:03 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:24.493 02:51:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:24.493 02:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.493 02:51:03 -- json_config/json_config.sh@323 -- # killprocess 72923 00:06:24.493 02:51:03 -- common/autotest_common.sh@936 -- # '[' -z 72923 ']' 00:06:24.493 02:51:03 -- common/autotest_common.sh@940 -- # kill -0 72923 00:06:24.493 02:51:03 -- common/autotest_common.sh@941 -- # uname 00:06:24.493 02:51:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:24.493 02:51:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72923 00:06:24.493 killing process with pid 72923 00:06:24.493 02:51:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:24.493 02:51:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:24.493 02:51:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72923' 00:06:24.493 
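Both json_diff.sh runs above pipe each side through config_filter.py -method sort before diffing, so key ordering alone cannot produce a false mismatch; only a real change, here the deletion of MallocBdevForConfigChangeCheck, flips the exit code. A simplified sketch of the same check (temp-file names illustrative, run from the repo root):

    # Snapshot, mutate, snapshot again -- both snapshots sorted into canonical form
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/before.json
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/after.json
    # diff exits non-zero when the sorted configs differ, which is what the test asserts here
    diff -u /tmp/before.json /tmp/after.json || echo 'INFO: configuration change detected.'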
02:51:03 -- common/autotest_common.sh@955 -- # kill 72923 00:06:24.493 02:51:03 -- common/autotest_common.sh@960 -- # wait 72923 00:06:24.751 02:51:03 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:24.751 02:51:03 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:24.751 02:51:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:24.751 02:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.751 INFO: Success 00:06:24.751 02:51:03 -- json_config/json_config.sh@328 -- # return 0 00:06:24.751 02:51:03 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:24.751 ************************************ 00:06:24.751 END TEST json_config 00:06:24.751 ************************************ 00:06:24.751 00:06:24.751 real 0m7.980s 00:06:24.751 user 0m11.500s 00:06:24.751 sys 0m1.442s 00:06:24.751 02:51:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:24.751 02:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:24.751 02:51:03 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:24.751 02:51:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:24.751 02:51:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:24.751 02:51:03 -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 ************************************ 00:06:25.010 START TEST json_config_extra_key 00:06:25.010 ************************************ 00:06:25.010 02:51:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:25.010 02:51:04 -- nvmf/common.sh@7 -- # uname -s 00:06:25.010 02:51:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:25.010 02:51:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:25.010 02:51:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:25.010 02:51:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:25.010 02:51:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:25.010 02:51:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:25.010 02:51:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:25.010 02:51:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:25.010 02:51:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:25.010 02:51:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:25.010 02:51:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:06:25.010 02:51:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:06:25.010 02:51:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:25.010 02:51:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:25.010 02:51:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:25.010 02:51:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:25.010 02:51:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:25.010 02:51:04 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:25.010 02:51:04 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:25.010 02:51:04 -- scripts/common.sh@511 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:25.010 02:51:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.010 02:51:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.010 02:51:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.010 02:51:04 -- paths/export.sh@5 -- # export PATH 00:06:25.010 02:51:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:25.010 02:51:04 -- nvmf/common.sh@47 -- # : 0 00:06:25.010 02:51:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:25.010 02:51:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:25.010 02:51:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:25.010 02:51:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:25.010 02:51:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:25.010 02:51:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:25.010 02:51:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:25.010 02:51:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:25.010 INFO: launching applications... 00:06:25.010 02:51:04 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.010 02:51:04 -- json_config/common.sh@9 -- # local app=target 00:06:25.010 02:51:04 -- json_config/common.sh@10 -- # shift 00:06:25.010 02:51:04 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:25.010 02:51:04 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:25.010 02:51:04 -- json_config/common.sh@15 -- # local app_extra_params= 00:06:25.010 02:51:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.010 02:51:04 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:25.010 02:51:04 -- json_config/common.sh@22 -- # app_pid["$app"]=73063 00:06:25.010 02:51:04 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:25.010 Waiting for target to run... 00:06:25.010 02:51:04 -- json_config/common.sh@25 -- # waitforlisten 73063 /var/tmp/spdk_tgt.sock 00:06:25.010 02:51:04 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:25.010 02:51:04 -- common/autotest_common.sh@817 -- # '[' -z 73063 ']' 00:06:25.010 02:51:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:25.010 02:51:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:25.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:25.010 02:51:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:25.010 02:51:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:25.010 02:51:04 -- common/autotest_common.sh@10 -- # set +x 00:06:25.010 [2024-04-23 02:51:04.089886] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:25.010 [2024-04-23 02:51:04.089987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73063 ] 00:06:25.268 [2024-04-23 02:51:04.382550] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:25.268 [2024-04-23 02:51:04.403313] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.526 [2024-04-23 02:51:04.427071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.092 00:06:26.092 INFO: shutting down applications... 00:06:26.092 02:51:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:26.092 02:51:05 -- common/autotest_common.sh@850 -- # return 0 00:06:26.092 02:51:05 -- json_config/common.sh@26 -- # echo '' 00:06:26.092 02:51:05 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
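The 'shutting down applications...' message above hands control to json_config_test_shutdown_app, whose loop the next lines trace: send SIGINT, then poll the pid with kill -0 for up to 30 half-second intervals. In outline (pid value illustrative, loop bounds and sleep taken from the trace):

    kill -SIGINT "$pid"                       # ask the target to exit cleanly
    for i in $(seq 1 30); do
        kill -0 "$pid" 2>/dev/null || break   # pid gone: shutdown completed
        sleep 0.5
    done
    echo 'SPDK target shutdown done'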
00:06:26.092 02:51:05 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:26.092 02:51:05 -- json_config/common.sh@31 -- # local app=target 00:06:26.092 02:51:05 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:26.092 02:51:05 -- json_config/common.sh@35 -- # [[ -n 73063 ]] 00:06:26.092 02:51:05 -- json_config/common.sh@38 -- # kill -SIGINT 73063 00:06:26.092 02:51:05 -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:26.092 02:51:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.092 02:51:05 -- json_config/common.sh@41 -- # kill -0 73063 00:06:26.092 02:51:05 -- json_config/common.sh@45 -- # sleep 0.5 00:06:26.658 02:51:05 -- json_config/common.sh@40 -- # (( i++ )) 00:06:26.658 02:51:05 -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:26.658 02:51:05 -- json_config/common.sh@41 -- # kill -0 73063 00:06:26.658 02:51:05 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:26.658 02:51:05 -- json_config/common.sh@43 -- # break 00:06:26.658 02:51:05 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:26.658 02:51:05 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:26.658 SPDK target shutdown done 00:06:26.658 02:51:05 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:26.658 Success 00:06:26.658 00:06:26.658 real 0m1.641s 00:06:26.658 user 0m1.497s 00:06:26.658 sys 0m0.312s 00:06:26.658 02:51:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:26.658 02:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.658 ************************************ 00:06:26.658 END TEST json_config_extra_key 00:06:26.658 ************************************ 00:06:26.658 02:51:05 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.658 02:51:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:26.658 02:51:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:26.658 02:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.658 ************************************ 00:06:26.658 START TEST alias_rpc 00:06:26.658 ************************************ 00:06:26.658 02:51:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:26.658 * Looking for test storage... 00:06:26.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:26.658 02:51:05 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:26.658 02:51:05 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=73138 00:06:26.658 02:51:05 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:26.658 02:51:05 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 73138 00:06:26.658 02:51:05 -- common/autotest_common.sh@817 -- # '[' -z 73138 ']' 00:06:26.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.658 02:51:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.658 02:51:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:26.658 02:51:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
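waitforlisten then blocks alias_rpc until pid 73138 is alive and its RPC socket answers. The helper lives in common/autotest_common.sh; a rough shell equivalent (the loop shape and poll interval are assumptions, only max_retries=100 and the socket path come from the log) would be:

    pid=73138    # pid echoed by the alias_rpc startup above
    for i in $(seq 1 100); do
        # the process must still exist and the RPC server must respond
        if kill -0 "$pid" 2>/dev/null && scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.1
    done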
00:06:26.658 02:51:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:26.658 02:51:05 -- common/autotest_common.sh@10 -- # set +x 00:06:26.917 [2024-04-23 02:51:05.851508] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:26.917 [2024-04-23 02:51:05.852293] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73138 ] 00:06:26.917 [2024-04-23 02:51:05.973481] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:26.917 [2024-04-23 02:51:05.992343] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.917 [2024-04-23 02:51:06.028680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.175 02:51:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:27.175 02:51:06 -- common/autotest_common.sh@850 -- # return 0 00:06:27.175 02:51:06 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:27.433 02:51:06 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 73138 00:06:27.433 02:51:06 -- common/autotest_common.sh@936 -- # '[' -z 73138 ']' 00:06:27.433 02:51:06 -- common/autotest_common.sh@940 -- # kill -0 73138 00:06:27.433 02:51:06 -- common/autotest_common.sh@941 -- # uname 00:06:27.433 02:51:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:27.433 02:51:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73138 00:06:27.433 killing process with pid 73138 00:06:27.433 02:51:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:27.433 02:51:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:27.433 02:51:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73138' 00:06:27.433 02:51:06 -- common/autotest_common.sh@955 -- # kill 73138 00:06:27.433 02:51:06 -- common/autotest_common.sh@960 -- # wait 73138 00:06:27.691 ************************************ 00:06:27.692 END TEST alias_rpc 00:06:27.692 ************************************ 00:06:27.692 00:06:27.692 real 0m1.025s 00:06:27.692 user 0m1.215s 00:06:27.692 sys 0m0.290s 00:06:27.692 02:51:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:27.692 02:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.692 02:51:06 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:27.692 02:51:06 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:27.692 02:51:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:27.692 02:51:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:27.692 02:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.692 ************************************ 00:06:27.692 START TEST spdkcli_tcp 00:06:27.692 ************************************ 00:06:27.692 02:51:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:27.955 * Looking for test storage... 
00:06:27.955 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:27.955 02:51:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:27.955 02:51:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:27.955 02:51:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:06:27.955 02:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=73206 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 73206 00:06:27.955 02:51:06 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:27.955 02:51:06 -- common/autotest_common.sh@817 -- # '[' -z 73206 ']' 00:06:27.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:27.955 02:51:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:27.955 02:51:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:27.955 02:51:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:27.955 02:51:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:27.955 02:51:06 -- common/autotest_common.sh@10 -- # set +x 00:06:27.955 [2024-04-23 02:51:06.993357] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:27.955 [2024-04-23 02:51:06.993460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73206 ] 00:06:28.214 [2024-04-23 02:51:07.114646] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
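spdkcli_tcp exercises the RPC server over TCP rather than the UNIX socket. As the next lines show, the test bridges the two with socat and points rpc.py at 127.0.0.1:9998:

    # Forward TCP port 9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    # -r 100 connection retries, -t 2 s timeout, then list every registered method
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The long JSON array that follows is the rpc_get_methods reply enumerating every RPC the target has registered.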
00:06:28.214 [2024-04-23 02:51:07.133630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.214 [2024-04-23 02:51:07.169034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.214 [2024-04-23 02:51:07.169026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.214 02:51:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:28.214 02:51:07 -- common/autotest_common.sh@850 -- # return 0 00:06:28.214 02:51:07 -- spdkcli/tcp.sh@31 -- # socat_pid=73210 00:06:28.214 02:51:07 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:28.214 02:51:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:28.472 [ 00:06:28.472 "bdev_malloc_delete", 00:06:28.472 "bdev_malloc_create", 00:06:28.472 "bdev_null_resize", 00:06:28.472 "bdev_null_delete", 00:06:28.472 "bdev_null_create", 00:06:28.472 "bdev_nvme_cuse_unregister", 00:06:28.472 "bdev_nvme_cuse_register", 00:06:28.472 "bdev_opal_new_user", 00:06:28.472 "bdev_opal_set_lock_state", 00:06:28.472 "bdev_opal_delete", 00:06:28.472 "bdev_opal_get_info", 00:06:28.472 "bdev_opal_create", 00:06:28.472 "bdev_nvme_opal_revert", 00:06:28.472 "bdev_nvme_opal_init", 00:06:28.472 "bdev_nvme_send_cmd", 00:06:28.472 "bdev_nvme_get_path_iostat", 00:06:28.472 "bdev_nvme_get_mdns_discovery_info", 00:06:28.472 "bdev_nvme_stop_mdns_discovery", 00:06:28.472 "bdev_nvme_start_mdns_discovery", 00:06:28.472 "bdev_nvme_set_multipath_policy", 00:06:28.472 "bdev_nvme_set_preferred_path", 00:06:28.472 "bdev_nvme_get_io_paths", 00:06:28.472 "bdev_nvme_remove_error_injection", 00:06:28.472 "bdev_nvme_add_error_injection", 00:06:28.472 "bdev_nvme_get_discovery_info", 00:06:28.472 "bdev_nvme_stop_discovery", 00:06:28.472 "bdev_nvme_start_discovery", 00:06:28.472 "bdev_nvme_get_controller_health_info", 00:06:28.472 "bdev_nvme_disable_controller", 00:06:28.472 "bdev_nvme_enable_controller", 00:06:28.472 "bdev_nvme_reset_controller", 00:06:28.472 "bdev_nvme_get_transport_statistics", 00:06:28.472 "bdev_nvme_apply_firmware", 00:06:28.472 "bdev_nvme_detach_controller", 00:06:28.472 "bdev_nvme_get_controllers", 00:06:28.472 "bdev_nvme_attach_controller", 00:06:28.472 "bdev_nvme_set_hotplug", 00:06:28.472 "bdev_nvme_set_options", 00:06:28.472 "bdev_passthru_delete", 00:06:28.472 "bdev_passthru_create", 00:06:28.472 "bdev_lvol_grow_lvstore", 00:06:28.472 "bdev_lvol_get_lvols", 00:06:28.472 "bdev_lvol_get_lvstores", 00:06:28.472 "bdev_lvol_delete", 00:06:28.472 "bdev_lvol_set_read_only", 00:06:28.472 "bdev_lvol_resize", 00:06:28.472 "bdev_lvol_decouple_parent", 00:06:28.472 "bdev_lvol_inflate", 00:06:28.472 "bdev_lvol_rename", 00:06:28.472 "bdev_lvol_clone_bdev", 00:06:28.472 "bdev_lvol_clone", 00:06:28.472 "bdev_lvol_snapshot", 00:06:28.472 "bdev_lvol_create", 00:06:28.472 "bdev_lvol_delete_lvstore", 00:06:28.472 "bdev_lvol_rename_lvstore", 00:06:28.472 "bdev_lvol_create_lvstore", 00:06:28.472 "bdev_raid_set_options", 00:06:28.472 "bdev_raid_remove_base_bdev", 00:06:28.472 "bdev_raid_add_base_bdev", 00:06:28.472 "bdev_raid_delete", 00:06:28.472 "bdev_raid_create", 00:06:28.472 "bdev_raid_get_bdevs", 00:06:28.472 "bdev_error_inject_error", 00:06:28.472 "bdev_error_delete", 00:06:28.472 "bdev_error_create", 00:06:28.472 "bdev_split_delete", 00:06:28.472 "bdev_split_create", 00:06:28.472 "bdev_delay_delete", 00:06:28.472 "bdev_delay_create", 00:06:28.472 "bdev_delay_update_latency", 00:06:28.472 "bdev_zone_block_delete", 
00:06:28.472 "bdev_zone_block_create", 00:06:28.472 "blobfs_create", 00:06:28.472 "blobfs_detect", 00:06:28.472 "blobfs_set_cache_size", 00:06:28.472 "bdev_aio_delete", 00:06:28.472 "bdev_aio_rescan", 00:06:28.472 "bdev_aio_create", 00:06:28.472 "bdev_ftl_set_property", 00:06:28.472 "bdev_ftl_get_properties", 00:06:28.472 "bdev_ftl_get_stats", 00:06:28.472 "bdev_ftl_unmap", 00:06:28.472 "bdev_ftl_unload", 00:06:28.472 "bdev_ftl_delete", 00:06:28.472 "bdev_ftl_load", 00:06:28.472 "bdev_ftl_create", 00:06:28.472 "bdev_virtio_attach_controller", 00:06:28.472 "bdev_virtio_scsi_get_devices", 00:06:28.472 "bdev_virtio_detach_controller", 00:06:28.472 "bdev_virtio_blk_set_hotplug", 00:06:28.473 "bdev_iscsi_delete", 00:06:28.473 "bdev_iscsi_create", 00:06:28.473 "bdev_iscsi_set_options", 00:06:28.473 "bdev_uring_delete", 00:06:28.473 "bdev_uring_rescan", 00:06:28.473 "bdev_uring_create", 00:06:28.473 "accel_error_inject_error", 00:06:28.473 "ioat_scan_accel_module", 00:06:28.473 "dsa_scan_accel_module", 00:06:28.473 "iaa_scan_accel_module", 00:06:28.473 "keyring_file_remove_key", 00:06:28.473 "keyring_file_add_key", 00:06:28.473 "iscsi_get_histogram", 00:06:28.473 "iscsi_enable_histogram", 00:06:28.473 "iscsi_set_options", 00:06:28.473 "iscsi_get_auth_groups", 00:06:28.473 "iscsi_auth_group_remove_secret", 00:06:28.473 "iscsi_auth_group_add_secret", 00:06:28.473 "iscsi_delete_auth_group", 00:06:28.473 "iscsi_create_auth_group", 00:06:28.473 "iscsi_set_discovery_auth", 00:06:28.473 "iscsi_get_options", 00:06:28.473 "iscsi_target_node_request_logout", 00:06:28.473 "iscsi_target_node_set_redirect", 00:06:28.473 "iscsi_target_node_set_auth", 00:06:28.473 "iscsi_target_node_add_lun", 00:06:28.473 "iscsi_get_stats", 00:06:28.473 "iscsi_get_connections", 00:06:28.473 "iscsi_portal_group_set_auth", 00:06:28.473 "iscsi_start_portal_group", 00:06:28.473 "iscsi_delete_portal_group", 00:06:28.473 "iscsi_create_portal_group", 00:06:28.473 "iscsi_get_portal_groups", 00:06:28.473 "iscsi_delete_target_node", 00:06:28.473 "iscsi_target_node_remove_pg_ig_maps", 00:06:28.473 "iscsi_target_node_add_pg_ig_maps", 00:06:28.473 "iscsi_create_target_node", 00:06:28.473 "iscsi_get_target_nodes", 00:06:28.473 "iscsi_delete_initiator_group", 00:06:28.473 "iscsi_initiator_group_remove_initiators", 00:06:28.473 "iscsi_initiator_group_add_initiators", 00:06:28.473 "iscsi_create_initiator_group", 00:06:28.473 "iscsi_get_initiator_groups", 00:06:28.473 "nvmf_set_crdt", 00:06:28.473 "nvmf_set_config", 00:06:28.473 "nvmf_set_max_subsystems", 00:06:28.473 "nvmf_subsystem_get_listeners", 00:06:28.473 "nvmf_subsystem_get_qpairs", 00:06:28.473 "nvmf_subsystem_get_controllers", 00:06:28.473 "nvmf_get_stats", 00:06:28.473 "nvmf_get_transports", 00:06:28.473 "nvmf_create_transport", 00:06:28.473 "nvmf_get_targets", 00:06:28.473 "nvmf_delete_target", 00:06:28.473 "nvmf_create_target", 00:06:28.473 "nvmf_subsystem_allow_any_host", 00:06:28.473 "nvmf_subsystem_remove_host", 00:06:28.473 "nvmf_subsystem_add_host", 00:06:28.473 "nvmf_ns_remove_host", 00:06:28.473 "nvmf_ns_add_host", 00:06:28.473 "nvmf_subsystem_remove_ns", 00:06:28.473 "nvmf_subsystem_add_ns", 00:06:28.473 "nvmf_subsystem_listener_set_ana_state", 00:06:28.473 "nvmf_discovery_get_referrals", 00:06:28.473 "nvmf_discovery_remove_referral", 00:06:28.473 "nvmf_discovery_add_referral", 00:06:28.473 "nvmf_subsystem_remove_listener", 00:06:28.473 "nvmf_subsystem_add_listener", 00:06:28.473 "nvmf_delete_subsystem", 00:06:28.473 "nvmf_create_subsystem", 00:06:28.473 
"nvmf_get_subsystems", 00:06:28.473 "env_dpdk_get_mem_stats", 00:06:28.473 "nbd_get_disks", 00:06:28.473 "nbd_stop_disk", 00:06:28.473 "nbd_start_disk", 00:06:28.473 "ublk_recover_disk", 00:06:28.473 "ublk_get_disks", 00:06:28.473 "ublk_stop_disk", 00:06:28.473 "ublk_start_disk", 00:06:28.473 "ublk_destroy_target", 00:06:28.473 "ublk_create_target", 00:06:28.473 "virtio_blk_create_transport", 00:06:28.473 "virtio_blk_get_transports", 00:06:28.473 "vhost_controller_set_coalescing", 00:06:28.473 "vhost_get_controllers", 00:06:28.473 "vhost_delete_controller", 00:06:28.473 "vhost_create_blk_controller", 00:06:28.473 "vhost_scsi_controller_remove_target", 00:06:28.473 "vhost_scsi_controller_add_target", 00:06:28.473 "vhost_start_scsi_controller", 00:06:28.473 "vhost_create_scsi_controller", 00:06:28.473 "thread_set_cpumask", 00:06:28.473 "framework_get_scheduler", 00:06:28.473 "framework_set_scheduler", 00:06:28.473 "framework_get_reactors", 00:06:28.473 "thread_get_io_channels", 00:06:28.473 "thread_get_pollers", 00:06:28.473 "thread_get_stats", 00:06:28.473 "framework_monitor_context_switch", 00:06:28.473 "spdk_kill_instance", 00:06:28.473 "log_enable_timestamps", 00:06:28.473 "log_get_flags", 00:06:28.473 "log_clear_flag", 00:06:28.473 "log_set_flag", 00:06:28.473 "log_get_level", 00:06:28.473 "log_set_level", 00:06:28.473 "log_get_print_level", 00:06:28.473 "log_set_print_level", 00:06:28.473 "framework_enable_cpumask_locks", 00:06:28.473 "framework_disable_cpumask_locks", 00:06:28.473 "framework_wait_init", 00:06:28.473 "framework_start_init", 00:06:28.473 "scsi_get_devices", 00:06:28.473 "bdev_get_histogram", 00:06:28.473 "bdev_enable_histogram", 00:06:28.473 "bdev_set_qos_limit", 00:06:28.473 "bdev_set_qd_sampling_period", 00:06:28.473 "bdev_get_bdevs", 00:06:28.473 "bdev_reset_iostat", 00:06:28.473 "bdev_get_iostat", 00:06:28.473 "bdev_examine", 00:06:28.473 "bdev_wait_for_examine", 00:06:28.473 "bdev_set_options", 00:06:28.473 "notify_get_notifications", 00:06:28.473 "notify_get_types", 00:06:28.473 "accel_get_stats", 00:06:28.473 "accel_set_options", 00:06:28.473 "accel_set_driver", 00:06:28.473 "accel_crypto_key_destroy", 00:06:28.473 "accel_crypto_keys_get", 00:06:28.473 "accel_crypto_key_create", 00:06:28.473 "accel_assign_opc", 00:06:28.473 "accel_get_module_info", 00:06:28.473 "accel_get_opc_assignments", 00:06:28.473 "vmd_rescan", 00:06:28.473 "vmd_remove_device", 00:06:28.473 "vmd_enable", 00:06:28.473 "sock_set_default_impl", 00:06:28.473 "sock_impl_set_options", 00:06:28.473 "sock_impl_get_options", 00:06:28.473 "iobuf_get_stats", 00:06:28.473 "iobuf_set_options", 00:06:28.473 "framework_get_pci_devices", 00:06:28.473 "framework_get_config", 00:06:28.473 "framework_get_subsystems", 00:06:28.473 "trace_get_info", 00:06:28.473 "trace_get_tpoint_group_mask", 00:06:28.473 "trace_disable_tpoint_group", 00:06:28.473 "trace_enable_tpoint_group", 00:06:28.473 "trace_clear_tpoint_mask", 00:06:28.473 "trace_set_tpoint_mask", 00:06:28.473 "keyring_get_keys", 00:06:28.473 "spdk_get_version", 00:06:28.473 "rpc_get_methods" 00:06:28.473 ] 00:06:28.473 02:51:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:28.473 02:51:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:06:28.473 02:51:07 -- common/autotest_common.sh@10 -- # set +x 00:06:28.732 02:51:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:28.732 02:51:07 -- spdkcli/tcp.sh@38 -- # killprocess 73206 00:06:28.732 02:51:07 -- common/autotest_common.sh@936 -- # '[' -z 73206 ']' 00:06:28.732 02:51:07 
-- common/autotest_common.sh@940 -- # kill -0 73206 00:06:28.732 02:51:07 -- common/autotest_common.sh@941 -- # uname 00:06:28.732 02:51:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:28.732 02:51:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73206 00:06:28.732 02:51:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:28.732 killing process with pid 73206 00:06:28.732 02:51:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:28.732 02:51:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73206' 00:06:28.732 02:51:07 -- common/autotest_common.sh@955 -- # kill 73206 00:06:28.732 02:51:07 -- common/autotest_common.sh@960 -- # wait 73206 00:06:28.992 00:06:28.992 real 0m1.060s 00:06:28.992 user 0m1.868s 00:06:28.992 sys 0m0.347s 00:06:28.992 02:51:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:28.992 02:51:07 -- common/autotest_common.sh@10 -- # set +x 00:06:28.992 ************************************ 00:06:28.992 END TEST spdkcli_tcp 00:06:28.992 ************************************ 00:06:28.992 02:51:07 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.992 02:51:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:28.992 02:51:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:28.992 02:51:07 -- common/autotest_common.sh@10 -- # set +x 00:06:28.992 ************************************ 00:06:28.992 START TEST dpdk_mem_utility 00:06:28.992 ************************************ 00:06:28.992 02:51:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:28.992 * Looking for test storage... 00:06:28.992 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:28.992 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:28.992 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=73289 00:06:28.992 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:28.992 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 73289 00:06:28.992 02:51:08 -- common/autotest_common.sh@817 -- # '[' -z 73289 ']' 00:06:28.992 02:51:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.992 02:51:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:28.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.992 02:51:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.992 02:51:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:28.992 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.250 [2024-04-23 02:51:08.164207] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:29.250 [2024-04-23 02:51:08.164324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73289 ] 00:06:29.250 [2024-04-23 02:51:08.281368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:29.250 [2024-04-23 02:51:08.296531] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.251 [2024-04-23 02:51:08.333166] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.511 02:51:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:29.512 02:51:08 -- common/autotest_common.sh@850 -- # return 0 00:06:29.512 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:29.512 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:29.512 02:51:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:29.512 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.512 { 00:06:29.512 "filename": "/tmp/spdk_mem_dump.txt" 00:06:29.512 } 00:06:29.512 02:51:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:29.512 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:29.512 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:29.512 1 heaps totaling size 814.000000 MiB 00:06:29.512 size: 814.000000 MiB heap id: 0 00:06:29.512 end heaps---------- 00:06:29.512 8 mempools totaling size 598.116089 MiB 00:06:29.512 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:29.512 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:29.512 size: 84.521057 MiB name: bdev_io_73289 00:06:29.512 size: 51.011292 MiB name: evtpool_73289 00:06:29.512 size: 50.003479 MiB name: msgpool_73289 00:06:29.512 size: 21.763794 MiB name: PDU_Pool 00:06:29.512 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:29.512 size: 0.026123 MiB name: Session_Pool 00:06:29.512 end mempools------- 00:06:29.512 6 memzones totaling size 4.142822 MiB 00:06:29.512 size: 1.000366 MiB name: RG_ring_0_73289 00:06:29.512 size: 1.000366 MiB name: RG_ring_1_73289 00:06:29.512 size: 1.000366 MiB name: RG_ring_4_73289 00:06:29.512 size: 1.000366 MiB name: RG_ring_5_73289 00:06:29.512 size: 0.125366 MiB name: RG_ring_2_73289 00:06:29.512 size: 0.015991 MiB name: RG_ring_3_73289 00:06:29.512 end memzones------- 00:06:29.512 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:29.512 heap id: 0 total size: 814.000000 MiB number of busy elements: 297 number of free elements: 15 00:06:29.512 list of free elements. 
size: 12.472473 MiB 00:06:29.512 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:29.512 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:29.512 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:29.512 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:29.512 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:29.512 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:29.512 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:29.512 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:29.512 element at address: 0x200000200000 with size: 0.833191 MiB 00:06:29.512 element at address: 0x20001aa00000 with size: 0.568787 MiB 00:06:29.512 element at address: 0x20000b200000 with size: 0.489807 MiB 00:06:29.512 element at address: 0x200000800000 with size: 0.486145 MiB 00:06:29.512 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:29.512 element at address: 0x200027e00000 with size: 0.395935 MiB 00:06:29.512 element at address: 0x200003a00000 with size: 0.347839 MiB 00:06:29.512 list of standard malloc elements. size: 199.264954 MiB 00:06:29.512 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:29.512 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:29.512 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:29.512 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:29.512 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:29.512 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:29.512 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:29.512 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:29.512 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:29.512 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6600 with size: 0.000183 MiB 
00:06:29.512 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087c740 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087c800 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087c980 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59180 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59240 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59300 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59480 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59540 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59600 with size: 0.000183 MiB 00:06:29.512 element at 
address: 0x200003a596c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59780 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59840 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59900 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:06:29.512 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27d640 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20000b2fdd80 
with size: 0.000183 MiB 00:06:29.513 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa919c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91a80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 
00:06:29.513 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:29.513 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e65680 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c280 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c900 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:06:29.513 element at 
address: 0x200027e6cd80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:06:29.513 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f240 
with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:29.514 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:29.514 list of memzone associated elements. size: 602.262573 MiB 00:06:29.514 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:29.514 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:29.514 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:29.514 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:29.514 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:29.514 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_73289_0 00:06:29.514 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:29.514 associated memzone info: size: 48.002930 MiB name: MP_evtpool_73289_0 00:06:29.514 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:29.514 associated memzone info: size: 48.002930 MiB name: MP_msgpool_73289_0 00:06:29.514 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:29.514 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:29.514 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:29.514 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:29.514 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:29.514 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_73289 00:06:29.514 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:29.514 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_73289 00:06:29.514 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:29.514 associated memzone info: size: 1.007996 MiB name: MP_evtpool_73289 00:06:29.514 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:29.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:29.514 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:29.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:29.514 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:29.514 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:29.514 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:29.514 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:29.514 element at address: 0x200003eff180 
with size: 1.000488 MiB 00:06:29.514 associated memzone info: size: 1.000366 MiB name: RG_ring_0_73289 00:06:29.514 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:29.514 associated memzone info: size: 1.000366 MiB name: RG_ring_1_73289 00:06:29.514 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:29.514 associated memzone info: size: 1.000366 MiB name: RG_ring_4_73289 00:06:29.514 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:29.514 associated memzone info: size: 1.000366 MiB name: RG_ring_5_73289 00:06:29.514 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:29.514 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_73289 00:06:29.514 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:29.514 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:29.514 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:29.514 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:29.514 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:29.514 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:29.514 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:29.514 associated memzone info: size: 0.125366 MiB name: RG_ring_2_73289 00:06:29.514 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:29.514 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:29.514 element at address: 0x200027e65740 with size: 0.023743 MiB 00:06:29.514 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:29.514 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:29.514 associated memzone info: size: 0.015991 MiB name: RG_ring_3_73289 00:06:29.514 element at address: 0x200027e6b880 with size: 0.002441 MiB 00:06:29.514 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:29.514 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:29.514 associated memzone info: size: 0.000183 MiB name: MP_msgpool_73289 00:06:29.514 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:29.514 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_73289 00:06:29.514 element at address: 0x200027e6c340 with size: 0.000305 MiB 00:06:29.514 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:29.514 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:29.514 02:51:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 73289 00:06:29.514 02:51:08 -- common/autotest_common.sh@936 -- # '[' -z 73289 ']' 00:06:29.514 02:51:08 -- common/autotest_common.sh@940 -- # kill -0 73289 00:06:29.514 02:51:08 -- common/autotest_common.sh@941 -- # uname 00:06:29.514 02:51:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:29.514 02:51:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73289 00:06:29.514 02:51:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:29.514 02:51:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:29.514 killing process with pid 73289 00:06:29.514 02:51:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73289' 00:06:29.514 02:51:08 -- common/autotest_common.sh@955 -- # kill 73289 00:06:29.514 02:51:08 -- common/autotest_common.sh@960 -- # wait 73289 00:06:29.789 00:06:29.789 real 0m0.876s 00:06:29.789 user 0m0.961s 00:06:29.789 sys 0m0.291s 00:06:29.789 
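(Editor's note: the long "element at address ... with size ..." listing above is the DPDK heap dump that dpdk_memory_utility/test_dpdk_mem_info.sh captures; each element line is one allocation in the rte_malloc heap, and the "associated memzone info" entries map reserved regions to their pool names — msgpool, the PDU pools, the SCSI task pool. A minimal sketch of requesting such a dump from a running SPDK app, assuming this tree provides the env_dpdk_get_mem_stats RPC and a socket at /var/tmp/spdk.sock:
  # Sketch: ask the running app to write its DPDK memory stats to a file.
  # The reply contains the path of the stats file; it holds the
  # "element at address ... with size ..." heap listing seen above.
  ./scripts/rpc.py -s /var/tmp/spdk.sock env_dpdk_get_mem_stats
)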
02:51:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:29.789 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:06:29.789 ************************************ 00:06:29.789 END TEST dpdk_mem_utility 00:06:29.789 ************************************ 00:06:29.789 02:51:08 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:29.789 02:51:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:29.789 02:51:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.789 02:51:08 -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 START TEST event 00:06:30.048 ************************************ 00:06:30.048 02:51:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:30.048 * Looking for test storage... 00:06:30.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:30.048 02:51:09 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:30.048 02:51:09 -- bdev/nbd_common.sh@6 -- # set -e 00:06:30.048 02:51:09 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.048 02:51:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:30.048 02:51:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.048 02:51:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.048 ************************************ 00:06:30.048 START TEST event_perf 00:06:30.048 ************************************ 00:06:30.048 02:51:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:30.048 Running I/O for 1 seconds...[2024-04-23 02:51:09.178726] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:30.048 [2024-04-23 02:51:09.178804] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73364 ] 00:06:30.307 [2024-04-23 02:51:09.298628] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:30.307 [2024-04-23 02:51:09.316475] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.307 [2024-04-23 02:51:09.350396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.307 [2024-04-23 02:51:09.350495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.307 [2024-04-23 02:51:09.350640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.307 [2024-04-23 02:51:09.350643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.242 Running I/O for 1 seconds... 00:06:31.242 lcore 0: 191979 00:06:31.242 lcore 1: 191979 00:06:31.242 lcore 2: 191979 00:06:31.242 lcore 3: 191979 00:06:31.500 done. 
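(Editor's note: the four near-identical lcore tallies above come from event_perf, which starts one reactor per bit set in the -m core mask and counts processed events for -t seconds; roughly equal per-core counts indicate the event framework spread work evenly across the reactors. A minimal sketch of the same invocation with a smaller mask and a longer window, using the paths from this job:
  # Sketch: run event_perf on two cores (-m 0x3 selects lcores 0-1)
  # for a 5-second measurement window (-t 5).
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 5
)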
00:06:31.500 00:06:31.500 real 0m1.246s 00:06:31.500 user 0m4.074s 00:06:31.500 sys 0m0.049s 00:06:31.500 02:51:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:31.500 02:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.500 ************************************ 00:06:31.500 END TEST event_perf 00:06:31.500 ************************************ 00:06:31.500 02:51:10 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:31.500 02:51:10 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:31.500 02:51:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:31.500 02:51:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.500 ************************************ 00:06:31.500 START TEST event_reactor 00:06:31.500 ************************************ 00:06:31.500 02:51:10 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:31.500 [2024-04-23 02:51:10.535845] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:31.500 [2024-04-23 02:51:10.535923] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73407 ] 00:06:31.500 [2024-04-23 02:51:10.654692] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:31.758 [2024-04-23 02:51:10.673303] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.758 [2024-04-23 02:51:10.710900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.696 test_start 00:06:32.696 oneshot 00:06:32.696 tick 100 00:06:32.696 tick 100 00:06:32.696 tick 250 00:06:32.696 tick 100 00:06:32.696 tick 100 00:06:32.696 tick 250 00:06:32.696 tick 100 00:06:32.696 tick 500 00:06:32.696 tick 100 00:06:32.696 tick 100 00:06:32.696 tick 250 00:06:32.696 tick 100 00:06:32.696 tick 100 00:06:32.696 test_end 00:06:32.696 00:06:32.696 real 0m1.243s 00:06:32.696 user 0m1.099s 00:06:32.696 sys 0m0.040s 00:06:32.696 02:51:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:32.696 02:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:32.696 ************************************ 00:06:32.696 END TEST event_reactor 00:06:32.696 ************************************ 00:06:32.696 02:51:11 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.696 02:51:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:32.696 02:51:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:32.696 02:51:11 -- common/autotest_common.sh@10 -- # set +x 00:06:32.956 ************************************ 00:06:32.956 START TEST event_reactor_perf 00:06:32.956 ************************************ 00:06:32.956 02:51:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:32.956 [2024-04-23 02:51:11.898320] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:06:32.956 [2024-04-23 02:51:11.898404] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73446 ] 00:06:32.956 [2024-04-23 02:51:12.017770] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:32.956 [2024-04-23 02:51:12.037795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.956 [2024-04-23 02:51:12.085733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.333 test_start 00:06:34.333 test_end 00:06:34.333 Performance: 405189 events per second 00:06:34.333 ************************************ 00:06:34.333 END TEST event_reactor_perf 00:06:34.333 ************************************ 00:06:34.333 00:06:34.333 real 0m1.259s 00:06:34.333 user 0m1.106s 00:06:34.333 sys 0m0.045s 00:06:34.333 02:51:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:34.333 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 02:51:13 -- event/event.sh@49 -- # uname -s 00:06:34.333 02:51:13 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:34.333 02:51:13 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:34.333 02:51:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.333 02:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.333 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 ************************************ 00:06:34.333 START TEST event_scheduler 00:06:34.333 ************************************ 00:06:34.333 02:51:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:34.333 * Looking for test storage... 00:06:34.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:34.333 02:51:13 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:34.333 02:51:13 -- scheduler/scheduler.sh@35 -- # scheduler_pid=73507 00:06:34.333 02:51:13 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:34.333 02:51:13 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.333 02:51:13 -- scheduler/scheduler.sh@37 -- # waitforlisten 73507 00:06:34.333 02:51:13 -- common/autotest_common.sh@817 -- # '[' -z 73507 ']' 00:06:34.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.333 02:51:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.333 02:51:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:34.333 02:51:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.333 02:51:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:34.333 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.333 [2024-04-23 02:51:13.376139] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:06:34.333 [2024-04-23 02:51:13.376231] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73507 ] 00:06:34.592 [2024-04-23 02:51:13.494182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:34.592 [2024-04-23 02:51:13.509489] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.592 [2024-04-23 02:51:13.554110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.592 [2024-04-23 02:51:13.554243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.592 [2024-04-23 02:51:13.554367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.592 [2024-04-23 02:51:13.554369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.592 02:51:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:34.592 02:51:13 -- common/autotest_common.sh@850 -- # return 0 00:06:34.592 02:51:13 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:34.592 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.592 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.592 POWER: Env isn't set yet! 00:06:34.592 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:34.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.592 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.592 POWER: Attempting to initialise PSTAT power management... 00:06:34.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.592 POWER: Cannot set governor of lcore 0 to performance 00:06:34.592 POWER: Attempting to initialise AMD PSTATE power management... 00:06:34.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.592 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.592 POWER: Attempting to initialise CPPC power management... 00:06:34.592 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:34.592 POWER: Cannot set governor of lcore 0 to userspace 00:06:34.592 POWER: Attempting to initialise VM power management... 00:06:34.592 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:34.592 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:34.592 POWER: Unable to set Power Management Environment for lcore 0 00:06:34.592 [2024-04-23 02:51:13.636011] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:34.592 [2024-04-23 02:51:13.636309] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:34.592 [2024-04-23 02:51:13.636562] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:34.592 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.592 02:51:13 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:34.592 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.592 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.592 [2024-04-23 02:51:13.680087] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
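(Editor's note: the POWER errors above are the expected fallback path inside a VM: framework_set_scheduler dynamic probes each cpufreq backend in turn — ACPI, PSTAT, AMD PSTATE, CPPC, then the virtio power agent — fails to open the sysfs scaling_governor files, and the test proceeds without the dpdk governor. A minimal sketch of the RPC sequence the script drives while the app sits in --wait-for-rpc, assuming the default socket path:
  # Sketch: select the dynamic scheduler before framework init,
  # then let the paused app finish starting up.
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init
)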
00:06:34.592 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.592 02:51:13 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:34.592 02:51:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:34.592 02:51:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:34.592 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 ************************************ 00:06:34.852 START TEST scheduler_create_thread 00:06:34.852 ************************************ 00:06:34.852 02:51:13 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 2 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 3 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 4 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 5 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 6 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 7 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 8 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 9 00:06:34.852 
02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 10 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:34.852 02:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:34.852 02:51:13 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:34.852 02:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:34.852 02:51:13 -- common/autotest_common.sh@10 -- # set +x 00:06:36.230 02:51:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:36.230 02:51:15 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:36.230 02:51:15 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:36.230 02:51:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:36.230 02:51:15 -- common/autotest_common.sh@10 -- # set +x 00:06:37.614 02:51:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:37.614 00:06:37.614 real 0m2.609s 00:06:37.614 user 0m0.019s 00:06:37.614 sys 0m0.005s 00:06:37.614 02:51:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.614 02:51:16 -- common/autotest_common.sh@10 -- # set +x 00:06:37.614 ************************************ 00:06:37.614 END TEST scheduler_create_thread 00:06:37.614 ************************************ 00:06:37.614 02:51:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:37.614 02:51:16 -- scheduler/scheduler.sh@46 -- # killprocess 73507 00:06:37.614 02:51:16 -- common/autotest_common.sh@936 -- # '[' -z 73507 ']' 00:06:37.614 02:51:16 -- common/autotest_common.sh@940 -- # kill -0 73507 00:06:37.614 02:51:16 -- common/autotest_common.sh@941 -- # uname 00:06:37.614 02:51:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:37.614 02:51:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73507 00:06:37.614 02:51:16 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:37.614 02:51:16 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:37.614 killing process with pid 73507 00:06:37.614 02:51:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73507' 00:06:37.614 02:51:16 -- common/autotest_common.sh@955 -- # kill 73507 00:06:37.615 02:51:16 -- common/autotest_common.sh@960 -- # wait 73507 00:06:37.874 [2024-04-23 02:51:16.844579] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
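(Editor's note: scheduler_create_thread above exercises the scheduler plugin RPCs: it creates pinned threads with varying busy percentages (-a) and core masks (-m), retunes one thread to 50% active, and deletes another, checking that the dynamic scheduler rebalances. A minimal sketch of that sequence, assuming the same plugin; the thread ids (11 and 12 in the log) are returned by the create calls:
  # Sketch: create a pinned thread at 100% load on lcore 0,
  # drop an existing thread to 50% active, then delete another.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
)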
00:06:37.874 00:06:37.874 real 0m3.748s 00:06:37.874 user 0m5.662s 00:06:37.874 sys 0m0.319s 00:06:37.874 02:51:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:37.874 02:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:37.874 ************************************ 00:06:37.874 END TEST event_scheduler 00:06:37.874 ************************************ 00:06:38.133 02:51:17 -- event/event.sh@51 -- # modprobe -n nbd 00:06:38.133 02:51:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:38.133 02:51:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.133 02:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.133 02:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.133 ************************************ 00:06:38.133 START TEST app_repeat 00:06:38.133 ************************************ 00:06:38.133 02:51:17 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:06:38.133 02:51:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.133 02:51:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.133 02:51:17 -- event/event.sh@13 -- # local nbd_list 00:06:38.133 02:51:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.133 02:51:17 -- event/event.sh@14 -- # local bdev_list 00:06:38.133 02:51:17 -- event/event.sh@15 -- # local repeat_times=4 00:06:38.133 02:51:17 -- event/event.sh@17 -- # modprobe nbd 00:06:38.133 02:51:17 -- event/event.sh@19 -- # repeat_pid=73608 00:06:38.133 02:51:17 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:38.133 02:51:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.133 Process app_repeat pid: 73608 00:06:38.133 02:51:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 73608' 00:06:38.133 02:51:17 -- event/event.sh@23 -- # for i in {0..2} 00:06:38.133 spdk_app_start Round 0 00:06:38.133 02:51:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:38.133 02:51:17 -- event/event.sh@25 -- # waitforlisten 73608 /var/tmp/spdk-nbd.sock 00:06:38.133 02:51:17 -- common/autotest_common.sh@817 -- # '[' -z 73608 ']' 00:06:38.133 02:51:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:38.133 02:51:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:38.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:38.133 02:51:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:38.133 02:51:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:38.133 02:51:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.133 [2024-04-23 02:51:17.156735] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:38.133 [2024-04-23 02:51:17.156813] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73608 ] 00:06:38.133 [2024-04-23 02:51:17.276255] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:06:38.392 [2024-04-23 02:51:17.295004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.392 [2024-04-23 02:51:17.337317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.392 [2024-04-23 02:51:17.337318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.392 02:51:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:38.392 02:51:17 -- common/autotest_common.sh@850 -- # return 0 00:06:38.392 02:51:17 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.651 Malloc0 00:06:38.651 02:51:17 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.916 Malloc1 00:06:38.916 02:51:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@12 -- # local i 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:38.916 02:51:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.175 /dev/nbd0 00:06:39.175 02:51:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.175 02:51:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.175 02:51:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:39.175 02:51:18 -- common/autotest_common.sh@855 -- # local i 00:06:39.175 02:51:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:39.175 02:51:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:39.176 02:51:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:39.176 02:51:18 -- common/autotest_common.sh@859 -- # break 00:06:39.176 02:51:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:39.176 02:51:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:39.176 02:51:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.176 1+0 records in 00:06:39.176 1+0 records out 00:06:39.176 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244181 s, 16.8 MB/s 00:06:39.176 02:51:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.176 02:51:18 -- common/autotest_common.sh@872 -- # size=4096 00:06:39.176 02:51:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.176 
02:51:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:39.176 02:51:18 -- common/autotest_common.sh@875 -- # return 0 00:06:39.176 02:51:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.176 02:51:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.176 02:51:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.435 /dev/nbd1 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.435 02:51:18 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:39.435 02:51:18 -- common/autotest_common.sh@855 -- # local i 00:06:39.435 02:51:18 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:39.435 02:51:18 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:39.435 02:51:18 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:39.435 02:51:18 -- common/autotest_common.sh@859 -- # break 00:06:39.435 02:51:18 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:39.435 02:51:18 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:39.435 02:51:18 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.435 1+0 records in 00:06:39.435 1+0 records out 00:06:39.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324733 s, 12.6 MB/s 00:06:39.435 02:51:18 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.435 02:51:18 -- common/autotest_common.sh@872 -- # size=4096 00:06:39.435 02:51:18 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:39.435 02:51:18 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:39.435 02:51:18 -- common/autotest_common.sh@875 -- # return 0 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.435 02:51:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:39.694 { 00:06:39.694 "nbd_device": "/dev/nbd0", 00:06:39.694 "bdev_name": "Malloc0" 00:06:39.694 }, 00:06:39.694 { 00:06:39.694 "nbd_device": "/dev/nbd1", 00:06:39.694 "bdev_name": "Malloc1" 00:06:39.694 } 00:06:39.694 ]' 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:39.694 { 00:06:39.694 "nbd_device": "/dev/nbd0", 00:06:39.694 "bdev_name": "Malloc0" 00:06:39.694 }, 00:06:39.694 { 00:06:39.694 "nbd_device": "/dev/nbd1", 00:06:39.694 "bdev_name": "Malloc1" 00:06:39.694 } 00:06:39.694 ]' 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:39.694 /dev/nbd1' 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:39.694 /dev/nbd1' 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@65 -- # count=2 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@95 -- # count=2 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:39.694 
02:51:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:39.694 256+0 records in 00:06:39.694 256+0 records out 00:06:39.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683458 s, 153 MB/s 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.694 02:51:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:39.695 256+0 records in 00:06:39.695 256+0 records out 00:06:39.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263122 s, 39.9 MB/s 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:39.695 256+0 records in 00:06:39.695 256+0 records out 00:06:39.695 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238904 s, 43.9 MB/s 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:39.695 02:51:18 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@51 -- # local i 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.954 02:51:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:39.954 02:51:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:39.954 02:51:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:39.954 02:51:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@37 -- # 
(( i = 1 )) 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@41 -- # break 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.955 02:51:19 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@41 -- # break 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.523 02:51:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@65 -- # true 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.782 02:51:19 -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.782 02:51:19 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:41.041 02:51:19 -- event/event.sh@35 -- # sleep 3 00:06:41.041 [2024-04-23 02:51:20.089382] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:41.041 [2024-04-23 02:51:20.125607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.041 [2024-04-23 02:51:20.125632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.041 [2024-04-23 02:51:20.156871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:41.041 [2024-04-23 02:51:20.156951] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
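(Editor's note: Round 0 above is one pass of the app_repeat data check: two 64 MB malloc bdevs with 4 KiB blocks are exported over NBD as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written to each with dd, read back with cmp against the source file, and the devices are torn down before the next round. A minimal sketch of one such pass for a single device, mirroring the nbd_common.sh helpers and paths in the log:
  # Sketch: create a malloc bdev, export it via NBD, write random data,
  # verify the readback, and stop the export.
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # creates Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
  dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
)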
00:06:44.329 02:51:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:44.329 spdk_app_start Round 1 00:06:44.329 02:51:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:44.329 02:51:22 -- event/event.sh@25 -- # waitforlisten 73608 /var/tmp/spdk-nbd.sock 00:06:44.329 02:51:22 -- common/autotest_common.sh@817 -- # '[' -z 73608 ']' 00:06:44.329 02:51:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:44.329 02:51:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:44.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:44.329 02:51:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:44.329 02:51:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:44.329 02:51:22 -- common/autotest_common.sh@10 -- # set +x 00:06:44.329 02:51:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:44.329 02:51:23 -- common/autotest_common.sh@850 -- # return 0 00:06:44.329 02:51:23 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.587 Malloc0 00:06:44.587 02:51:23 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:44.846 Malloc1 00:06:44.846 02:51:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@12 -- # local i 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:44.846 02:51:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.104 /dev/nbd0 00:06:45.104 02:51:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.104 02:51:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.104 02:51:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:45.104 02:51:24 -- common/autotest_common.sh@855 -- # local i 00:06:45.104 02:51:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:45.104 02:51:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:45.104 02:51:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:45.104 02:51:24 -- common/autotest_common.sh@859 -- # break 00:06:45.104 02:51:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:45.104 02:51:24 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:06:45.104 02:51:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.104 1+0 records in 00:06:45.104 1+0 records out 00:06:45.104 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223168 s, 18.4 MB/s 00:06:45.104 02:51:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.104 02:51:24 -- common/autotest_common.sh@872 -- # size=4096 00:06:45.104 02:51:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.104 02:51:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:45.104 02:51:24 -- common/autotest_common.sh@875 -- # return 0 00:06:45.104 02:51:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.104 02:51:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.104 02:51:24 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:45.363 /dev/nbd1 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.363 02:51:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:45.363 02:51:24 -- common/autotest_common.sh@855 -- # local i 00:06:45.363 02:51:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:45.363 02:51:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:45.363 02:51:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:45.363 02:51:24 -- common/autotest_common.sh@859 -- # break 00:06:45.363 02:51:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:45.363 02:51:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:45.363 02:51:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.363 1+0 records in 00:06:45.363 1+0 records out 00:06:45.363 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028378 s, 14.4 MB/s 00:06:45.363 02:51:24 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.363 02:51:24 -- common/autotest_common.sh@872 -- # size=4096 00:06:45.363 02:51:24 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:45.363 02:51:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:45.363 02:51:24 -- common/autotest_common.sh@875 -- # return 0 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.363 02:51:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:45.622 { 00:06:45.622 "nbd_device": "/dev/nbd0", 00:06:45.622 "bdev_name": "Malloc0" 00:06:45.622 }, 00:06:45.622 { 00:06:45.622 "nbd_device": "/dev/nbd1", 00:06:45.622 "bdev_name": "Malloc1" 00:06:45.622 } 00:06:45.622 ]' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:45.622 { 00:06:45.622 "nbd_device": "/dev/nbd0", 00:06:45.622 "bdev_name": "Malloc0" 00:06:45.622 }, 00:06:45.622 { 00:06:45.622 
"nbd_device": "/dev/nbd1", 00:06:45.622 "bdev_name": "Malloc1" 00:06:45.622 } 00:06:45.622 ]' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:45.622 /dev/nbd1' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:45.622 /dev/nbd1' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@65 -- # count=2 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@95 -- # count=2 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:45.622 256+0 records in 00:06:45.622 256+0 records out 00:06:45.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00924087 s, 113 MB/s 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:45.622 256+0 records in 00:06:45.622 256+0 records out 00:06:45.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254401 s, 41.2 MB/s 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:45.622 256+0 records in 00:06:45.622 256+0 records out 00:06:45.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272713 s, 38.4 MB/s 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:45.622 02:51:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:45.623 02:51:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@51 -- # local i 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.623 02:51:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@41 -- # break 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:45.881 02:51:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@41 -- # break 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.140 02:51:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@65 -- # true 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.399 02:51:25 -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.399 02:51:25 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:46.659 02:51:25 -- event/event.sh@35 -- # sleep 3 00:06:46.918 [2024-04-23 02:51:25.837405] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:46.918 [2024-04-23 02:51:25.867998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.918 [2024-04-23 02:51:25.868008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.918 [2024-04-23 02:51:25.897298] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
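[Annotation] The block above is one full nbd_rpc_data_verify round trip: two 64 MB malloc bdevs are created over the RPC socket, exported as /dev/nbd0 and /dev/nbd1, seeded from a 1 MiB random file, compared back byte-for-byte, and then torn down. A condensed sketch of the same flow for a single device, using only the commands visible in the log (paths follow the repo layout shown above):

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock "$@"; }
    rpc bdev_malloc_create 64 4096                        # prints the new bdev name, e.g. Malloc0
    rpc nbd_start_disk Malloc0 /dev/nbd0                  # export the bdev as a kernel NBD device
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256   # 1 MiB of reference data
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
    cmp -b -n 1M nbdrandtest /dev/nbd0                    # exits non-zero on any byte mismatch
    rpc nbd_stop_disk /dev/nbd0
    rm nbdrandtest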
00:06:46.918 [2024-04-23 02:51:25.897366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.202 02:51:28 -- event/event.sh@23 -- # for i in {0..2} 00:06:50.202 spdk_app_start Round 2 00:06:50.202 02:51:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:50.202 02:51:28 -- event/event.sh@25 -- # waitforlisten 73608 /var/tmp/spdk-nbd.sock 00:06:50.202 02:51:28 -- common/autotest_common.sh@817 -- # '[' -z 73608 ']' 00:06:50.202 02:51:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.202 02:51:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:50.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.202 02:51:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:50.202 02:51:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:50.202 02:51:28 -- common/autotest_common.sh@10 -- # set +x 00:06:50.202 02:51:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:50.202 02:51:28 -- common/autotest_common.sh@850 -- # return 0 00:06:50.202 02:51:28 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.202 Malloc0 00:06:50.202 02:51:29 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:50.461 Malloc1 00:06:50.461 02:51:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.461 02:51:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.461 02:51:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.461 02:51:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:50.461 02:51:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.461 02:51:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@12 -- # local i 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.462 02:51:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:50.721 /dev/nbd0 00:06:50.721 02:51:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:50.721 02:51:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:50.721 02:51:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:06:50.721 02:51:29 -- common/autotest_common.sh@855 -- # local i 00:06:50.721 02:51:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:50.721 02:51:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:50.721 02:51:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:06:50.721 02:51:29 -- common/autotest_common.sh@859 
-- # break 00:06:50.721 02:51:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:50.721 02:51:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:50.721 02:51:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.721 1+0 records in 00:06:50.721 1+0 records out 00:06:50.721 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000300435 s, 13.6 MB/s 00:06:50.721 02:51:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.721 02:51:29 -- common/autotest_common.sh@872 -- # size=4096 00:06:50.721 02:51:29 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.721 02:51:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:50.721 02:51:29 -- common/autotest_common.sh@875 -- # return 0 00:06:50.721 02:51:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.721 02:51:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.721 02:51:29 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:50.980 /dev/nbd1 00:06:50.980 02:51:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:50.980 02:51:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:50.980 02:51:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:06:50.980 02:51:29 -- common/autotest_common.sh@855 -- # local i 00:06:50.980 02:51:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:06:50.980 02:51:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:06:50.980 02:51:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:06:50.980 02:51:29 -- common/autotest_common.sh@859 -- # break 00:06:50.980 02:51:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:06:50.980 02:51:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:06:50.980 02:51:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:50.980 1+0 records in 00:06:50.980 1+0 records out 00:06:50.980 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291547 s, 14.0 MB/s 00:06:50.980 02:51:29 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.980 02:51:30 -- common/autotest_common.sh@872 -- # size=4096 00:06:50.980 02:51:30 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:50.980 02:51:30 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:06:50.980 02:51:30 -- common/autotest_common.sh@875 -- # return 0 00:06:50.980 02:51:30 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.980 02:51:30 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:50.980 02:51:30 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.980 02:51:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.980 02:51:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:51.239 { 00:06:51.239 "nbd_device": "/dev/nbd0", 00:06:51.239 "bdev_name": "Malloc0" 00:06:51.239 }, 00:06:51.239 { 00:06:51.239 "nbd_device": "/dev/nbd1", 00:06:51.239 "bdev_name": "Malloc1" 00:06:51.239 } 00:06:51.239 ]' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:51.239 { 00:06:51.239 "nbd_device": "/dev/nbd0", 00:06:51.239 
"bdev_name": "Malloc0" 00:06:51.239 }, 00:06:51.239 { 00:06:51.239 "nbd_device": "/dev/nbd1", 00:06:51.239 "bdev_name": "Malloc1" 00:06:51.239 } 00:06:51.239 ]' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:51.239 /dev/nbd1' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:51.239 /dev/nbd1' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@65 -- # count=2 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@95 -- # count=2 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:51.239 256+0 records in 00:06:51.239 256+0 records out 00:06:51.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00925555 s, 113 MB/s 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:51.239 256+0 records in 00:06:51.239 256+0 records out 00:06:51.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02249 s, 46.6 MB/s 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:51.239 256+0 records in 00:06:51.239 256+0 records out 00:06:51.239 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250881 s, 41.8 MB/s 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.239 02:51:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:51.499 02:51:30 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@51 -- # local i 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.499 02:51:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@41 -- # break 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.758 02:51:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@41 -- # break 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.017 02:51:30 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@65 -- # true 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.275 02:51:31 -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.276 02:51:31 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:52.535 02:51:31 -- event/event.sh@35 -- # sleep 3 00:06:52.535 [2024-04-23 02:51:31.646822] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.535 [2024-04-23 02:51:31.676311] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.535 [2024-04-23 02:51:31.676323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.793 [2024-04-23 02:51:31.705619] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:06:52.793 [2024-04-23 02:51:31.705688] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:56.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:56.083 02:51:34 -- event/event.sh@38 -- # waitforlisten 73608 /var/tmp/spdk-nbd.sock 00:06:56.083 02:51:34 -- common/autotest_common.sh@817 -- # '[' -z 73608 ']' 00:06:56.083 02:51:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:56.083 02:51:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:56.083 02:51:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:56.083 02:51:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:56.083 02:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 02:51:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.083 02:51:34 -- common/autotest_common.sh@850 -- # return 0 00:06:56.083 02:51:34 -- event/event.sh@39 -- # killprocess 73608 00:06:56.083 02:51:34 -- common/autotest_common.sh@936 -- # '[' -z 73608 ']' 00:06:56.083 02:51:34 -- common/autotest_common.sh@940 -- # kill -0 73608 00:06:56.083 02:51:34 -- common/autotest_common.sh@941 -- # uname 00:06:56.083 02:51:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:56.083 02:51:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73608 00:06:56.083 02:51:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:56.083 killing process with pid 73608 00:06:56.083 02:51:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:56.083 02:51:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73608' 00:06:56.083 02:51:34 -- common/autotest_common.sh@955 -- # kill 73608 00:06:56.083 02:51:34 -- common/autotest_common.sh@960 -- # wait 73608 00:06:56.083 spdk_app_start is called in Round 0. 00:06:56.083 Shutdown signal received, stop current app iteration 00:06:56.083 Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 reinitialization... 00:06:56.083 spdk_app_start is called in Round 1. 00:06:56.083 Shutdown signal received, stop current app iteration 00:06:56.083 Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 reinitialization... 00:06:56.083 spdk_app_start is called in Round 2. 00:06:56.083 Shutdown signal received, stop current app iteration 00:06:56.083 Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 reinitialization... 00:06:56.083 spdk_app_start is called in Round 3. 
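[Annotation] The app_repeat rounds above all follow the loop from event.sh that is partially visible in the log: announce the round, wait for the app's RPC socket, run the malloc/NBD verification, then ask the app to restart itself via the spdk_kill_instance RPC. A minimal sketch of that loop, with the verification body elided; the exact round numbering and restart mechanics are abbreviated assumptions:

    for i in {0..2}; do
        echo "spdk_app_start Round $((i + 1))"
        waitforlisten "$pid" /var/tmp/spdk-nbd.sock    # block until the RPC socket answers
        # ... bdev_malloc_create + nbd_rpc_data_verify, as sketched earlier ...
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
        sleep 3                                        # give the app time to restart before the next round
    done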
00:06:56.083 Shutdown signal received, stop current app iteration 00:06:56.083 ************************************ 00:06:56.083 END TEST app_repeat 00:06:56.083 ************************************ 00:06:56.083 02:51:34 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:56.083 02:51:34 -- event/event.sh@42 -- # return 0 00:06:56.083 00:06:56.083 real 0m17.815s 00:06:56.083 user 0m40.479s 00:06:56.083 sys 0m2.492s 00:06:56.083 02:51:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:56.083 02:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 02:51:34 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:56.083 02:51:34 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.083 02:51:34 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.083 02:51:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.083 02:51:34 -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 ************************************ 00:06:56.083 START TEST cpu_locks 00:06:56.083 ************************************ 00:06:56.083 02:51:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:56.083 * Looking for test storage... 00:06:56.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:56.083 02:51:35 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:56.083 02:51:35 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:56.083 02:51:35 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:56.083 02:51:35 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:56.083 02:51:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:56.083 02:51:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:56.083 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.083 ************************************ 00:06:56.083 START TEST default_locks 00:06:56.083 ************************************ 00:06:56.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:56.083 02:51:35 -- common/autotest_common.sh@1111 -- # default_locks 00:06:56.083 02:51:35 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=74039 00:06:56.083 02:51:35 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:56.083 02:51:35 -- event/cpu_locks.sh@47 -- # waitforlisten 74039 00:06:56.083 02:51:35 -- common/autotest_common.sh@817 -- # '[' -z 74039 ']' 00:06:56.083 02:51:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:56.083 02:51:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:56.083 02:51:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:56.083 02:51:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:56.083 02:51:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.343 [2024-04-23 02:51:35.254945] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:56.343 [2024-04-23 02:51:35.255234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74039 ] 00:06:56.343 [2024-04-23 02:51:35.370946] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:06:56.343 [2024-04-23 02:51:35.388997] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.343 [2024-04-23 02:51:35.421494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.601 02:51:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:56.601 02:51:35 -- common/autotest_common.sh@850 -- # return 0 00:06:56.601 02:51:35 -- event/cpu_locks.sh@49 -- # locks_exist 74039 00:06:56.601 02:51:35 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.601 02:51:35 -- event/cpu_locks.sh@22 -- # lslocks -p 74039 00:06:57.169 02:51:36 -- event/cpu_locks.sh@50 -- # killprocess 74039 00:06:57.169 02:51:36 -- common/autotest_common.sh@936 -- # '[' -z 74039 ']' 00:06:57.169 02:51:36 -- common/autotest_common.sh@940 -- # kill -0 74039 00:06:57.169 02:51:36 -- common/autotest_common.sh@941 -- # uname 00:06:57.169 02:51:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:57.169 02:51:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74039 00:06:57.169 killing process with pid 74039 00:06:57.169 02:51:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:57.169 02:51:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:57.169 02:51:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74039' 00:06:57.169 02:51:36 -- common/autotest_common.sh@955 -- # kill 74039 00:06:57.169 02:51:36 -- common/autotest_common.sh@960 -- # wait 74039 00:06:57.169 02:51:36 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 74039 00:06:57.169 02:51:36 -- common/autotest_common.sh@638 -- # local es=0 00:06:57.169 02:51:36 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 74039 00:06:57.169 02:51:36 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:06:57.169 02:51:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.169 02:51:36 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:06:57.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.169 02:51:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:57.169 02:51:36 -- common/autotest_common.sh@641 -- # waitforlisten 74039 00:06:57.170 02:51:36 -- common/autotest_common.sh@817 -- # '[' -z 74039 ']' 00:06:57.170 02:51:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.170 02:51:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:57.170 02:51:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
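[Annotation] locks_exist above is the core assertion of default_locks: a freshly started spdk_tgt with -m 0x1 must hold a POSIX file lock on the core-0 lock file. A minimal sketch of that check; the two commands are exactly the ones shown in the log, only the wrapper plumbing is assumed:

    locks_exist() {
        local pid=$1
        # spdk_tgt flocks a per-core lock file whose name contains "spdk_cpu_lock";
        # lslocks -p lists the locks held by the pid so we can grep for it.
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }
    locks_exist 74039 || echo "pid 74039 holds no core lock"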
00:06:57.170 02:51:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:57.170 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.170 ERROR: process (pid: 74039) is no longer running 00:06:57.170 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (74039) - No such process 00:06:57.170 02:51:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:57.170 02:51:36 -- common/autotest_common.sh@850 -- # return 1 00:06:57.170 02:51:36 -- common/autotest_common.sh@641 -- # es=1 00:06:57.170 02:51:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:57.170 02:51:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:57.170 02:51:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:57.170 02:51:36 -- event/cpu_locks.sh@54 -- # no_locks 00:06:57.170 02:51:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:57.170 02:51:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:57.170 02:51:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:57.170 00:06:57.170 real 0m1.113s 00:06:57.170 user 0m1.155s 00:06:57.170 sys 0m0.422s 00:06:57.170 02:51:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:57.170 ************************************ 00:06:57.170 END TEST default_locks 00:06:57.170 ************************************ 00:06:57.170 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.429 02:51:36 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:57.429 02:51:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:57.429 02:51:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:57.429 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.429 ************************************ 00:06:57.429 START TEST default_locks_via_rpc 00:06:57.429 ************************************ 00:06:57.429 02:51:36 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:06:57.429 02:51:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=74082 00:06:57.429 02:51:36 -- event/cpu_locks.sh@63 -- # waitforlisten 74082 00:06:57.429 02:51:36 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:57.429 02:51:36 -- common/autotest_common.sh@817 -- # '[' -z 74082 ']' 00:06:57.429 02:51:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.429 02:51:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:57.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.429 02:51:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.429 02:51:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:57.429 02:51:36 -- common/autotest_common.sh@10 -- # set +x 00:06:57.429 [2024-04-23 02:51:36.499444] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:57.429 [2024-04-23 02:51:36.499812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74082 ] 00:06:57.702 [2024-04-23 02:51:36.624715] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
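[Annotation] The NOT waitforlisten 74039 sequence that closes default_locks above asserts the opposite condition: once the target is killed, waiting on its socket must fail (es=1). The log shows only NOT's entry and exit, so this is a hypothetical sketch of the expected-failure idiom it implements:

    NOT() {
        # Run the given command; succeed only if it fails.
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT waitforlisten 74039 /var/tmp/spdk.sock   # passes, because pid 74039 is gone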
00:06:57.702 [2024-04-23 02:51:36.636625] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.702 [2024-04-23 02:51:36.669676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.286 02:51:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:58.286 02:51:37 -- common/autotest_common.sh@850 -- # return 0 00:06:58.286 02:51:37 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:58.286 02:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.286 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.286 02:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.286 02:51:37 -- event/cpu_locks.sh@67 -- # no_locks 00:06:58.286 02:51:37 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:58.286 02:51:37 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:58.286 02:51:37 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:58.286 02:51:37 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:58.286 02:51:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:58.286 02:51:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.545 02:51:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:58.545 02:51:37 -- event/cpu_locks.sh@71 -- # locks_exist 74082 00:06:58.545 02:51:37 -- event/cpu_locks.sh@22 -- # lslocks -p 74082 00:06:58.545 02:51:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:58.804 02:51:37 -- event/cpu_locks.sh@73 -- # killprocess 74082 00:06:58.804 02:51:37 -- common/autotest_common.sh@936 -- # '[' -z 74082 ']' 00:06:58.804 02:51:37 -- common/autotest_common.sh@940 -- # kill -0 74082 00:06:58.804 02:51:37 -- common/autotest_common.sh@941 -- # uname 00:06:58.804 02:51:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:58.804 02:51:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74082 00:06:58.804 killing process with pid 74082 00:06:58.804 02:51:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:58.804 02:51:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:58.804 02:51:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74082' 00:06:58.804 02:51:37 -- common/autotest_common.sh@955 -- # kill 74082 00:06:58.804 02:51:37 -- common/autotest_common.sh@960 -- # wait 74082 00:06:59.063 ************************************ 00:06:59.063 END TEST default_locks_via_rpc 00:06:59.063 ************************************ 00:06:59.063 00:06:59.063 real 0m1.587s 00:06:59.063 user 0m1.785s 00:06:59.063 sys 0m0.404s 00:06:59.063 02:51:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:59.063 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.063 02:51:38 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:59.063 02:51:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:59.063 02:51:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.063 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.063 ************************************ 00:06:59.063 START TEST non_locking_app_on_locked_coremask 00:06:59.063 ************************************ 00:06:59.063 02:51:38 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:06:59.063 02:51:38 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=74137 00:06:59.063 02:51:38 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:59.063 02:51:38 -- 
event/cpu_locks.sh@81 -- # waitforlisten 74137 /var/tmp/spdk.sock 00:06:59.063 02:51:38 -- common/autotest_common.sh@817 -- # '[' -z 74137 ']' 00:06:59.063 02:51:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.063 02:51:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:59.063 02:51:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.063 02:51:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:59.063 02:51:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.064 [2024-04-23 02:51:38.200843] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:06:59.064 [2024-04-23 02:51:38.200964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74137 ] 00:06:59.323 [2024-04-23 02:51:38.322691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:06:59.323 [2024-04-23 02:51:38.341920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.323 [2024-04-23 02:51:38.377569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.261 02:51:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:00.261 02:51:39 -- common/autotest_common.sh@850 -- # return 0 00:07:00.261 02:51:39 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=74153 00:07:00.261 02:51:39 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:00.261 02:51:39 -- event/cpu_locks.sh@85 -- # waitforlisten 74153 /var/tmp/spdk2.sock 00:07:00.261 02:51:39 -- common/autotest_common.sh@817 -- # '[' -z 74153 ']' 00:07:00.261 02:51:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.261 02:51:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:00.261 02:51:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.261 02:51:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:00.261 02:51:39 -- common/autotest_common.sh@10 -- # set +x 00:07:00.261 [2024-04-23 02:51:39.174210] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:00.261 [2024-04-23 02:51:39.174542] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74153 ] 00:07:00.261 [2024-04-23 02:51:39.298821] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:00.261 [2024-04-23 02:51:39.317787] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
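[Annotation] Two ways of relaxing the core locks appear above: default_locks_via_rpc toggles them at runtime on a live target, while non_locking_app_on_locked_coremask launches its second instance with locking disabled so it can share core 0 with the primary that already holds the lock. Both knobs, as used in the log:

    # Runtime toggle on a running target (default_locks_via_rpc):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_disable_cpumask_locks
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_enable_cpumask_locks
    # Launch-time opt-out for a second instance on the same core
    # (non_locking_app_on_locked_coremask); note the separate RPC socket:
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock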
00:07:00.261 [2024-04-23 02:51:39.317872] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.261 [2024-04-23 02:51:39.381609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.197 02:51:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:01.197 02:51:40 -- common/autotest_common.sh@850 -- # return 0 00:07:01.197 02:51:40 -- event/cpu_locks.sh@87 -- # locks_exist 74137 00:07:01.197 02:51:40 -- event/cpu_locks.sh@22 -- # lslocks -p 74137 00:07:01.197 02:51:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:02.134 02:51:40 -- event/cpu_locks.sh@89 -- # killprocess 74137 00:07:02.134 02:51:40 -- common/autotest_common.sh@936 -- # '[' -z 74137 ']' 00:07:02.134 02:51:40 -- common/autotest_common.sh@940 -- # kill -0 74137 00:07:02.134 02:51:40 -- common/autotest_common.sh@941 -- # uname 00:07:02.134 02:51:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.134 02:51:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74137 00:07:02.134 killing process with pid 74137 00:07:02.134 02:51:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.134 02:51:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.134 02:51:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74137' 00:07:02.134 02:51:40 -- common/autotest_common.sh@955 -- # kill 74137 00:07:02.134 02:51:40 -- common/autotest_common.sh@960 -- # wait 74137 00:07:02.393 02:51:41 -- event/cpu_locks.sh@90 -- # killprocess 74153 00:07:02.393 02:51:41 -- common/autotest_common.sh@936 -- # '[' -z 74153 ']' 00:07:02.393 02:51:41 -- common/autotest_common.sh@940 -- # kill -0 74153 00:07:02.393 02:51:41 -- common/autotest_common.sh@941 -- # uname 00:07:02.393 02:51:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:02.393 02:51:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74153 00:07:02.393 02:51:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:02.393 02:51:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:02.393 killing process with pid 74153 00:07:02.393 02:51:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74153' 00:07:02.393 02:51:41 -- common/autotest_common.sh@955 -- # kill 74153 00:07:02.393 02:51:41 -- common/autotest_common.sh@960 -- # wait 74153 00:07:02.652 ************************************ 00:07:02.652 END TEST non_locking_app_on_locked_coremask 00:07:02.652 ************************************ 00:07:02.652 00:07:02.652 real 0m3.530s 00:07:02.652 user 0m4.114s 00:07:02.652 sys 0m0.911s 00:07:02.652 02:51:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:02.652 02:51:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.652 02:51:41 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:02.652 02:51:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:02.652 02:51:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:02.652 02:51:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.652 ************************************ 00:07:02.652 START TEST locking_app_on_unlocked_coremask 00:07:02.652 ************************************ 00:07:02.652 02:51:41 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:07:02.652 02:51:41 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=74214 00:07:02.652 02:51:41 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 
0x1 --disable-cpumask-locks 00:07:02.652 02:51:41 -- event/cpu_locks.sh@99 -- # waitforlisten 74214 /var/tmp/spdk.sock 00:07:02.652 02:51:41 -- common/autotest_common.sh@817 -- # '[' -z 74214 ']' 00:07:02.652 02:51:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.652 02:51:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:02.652 02:51:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.652 02:51:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:02.652 02:51:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.912 [2024-04-23 02:51:41.843389] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:02.912 [2024-04-23 02:51:41.843484] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74214 ] 00:07:02.912 [2024-04-23 02:51:41.964308] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:02.912 [2024-04-23 02:51:41.982073] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:02.912 [2024-04-23 02:51:41.982105] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.912 [2024-04-23 02:51:42.018130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.171 02:51:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:03.171 02:51:42 -- common/autotest_common.sh@850 -- # return 0 00:07:03.171 02:51:42 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=74228 00:07:03.171 02:51:42 -- event/cpu_locks.sh@103 -- # waitforlisten 74228 /var/tmp/spdk2.sock 00:07:03.171 02:51:42 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.171 02:51:42 -- common/autotest_common.sh@817 -- # '[' -z 74228 ']' 00:07:03.171 02:51:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.171 02:51:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:03.171 02:51:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.171 02:51:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:03.171 02:51:42 -- common/autotest_common.sh@10 -- # set +x 00:07:03.171 [2024-04-23 02:51:42.238790] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:03.171 [2024-04-23 02:51:42.239205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74228 ] 00:07:03.429 [2024-04-23 02:51:42.362751] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
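[Annotation] Every test above gates on waitforlisten before issuing RPCs. The log exposes its parameters (an rpc_addr defaulting to /var/tmp/spdk.sock and max_retries=100) but not its body, so the polling loop below is an assumption sketched around those values:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died while we waited
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0                              # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }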
00:07:03.429 [2024-04-23 02:51:42.387203] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.429 [2024-04-23 02:51:42.459587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.365 02:51:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:04.365 02:51:43 -- common/autotest_common.sh@850 -- # return 0 00:07:04.365 02:51:43 -- event/cpu_locks.sh@105 -- # locks_exist 74228 00:07:04.365 02:51:43 -- event/cpu_locks.sh@22 -- # lslocks -p 74228 00:07:04.365 02:51:43 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.931 02:51:44 -- event/cpu_locks.sh@107 -- # killprocess 74214 00:07:04.931 02:51:44 -- common/autotest_common.sh@936 -- # '[' -z 74214 ']' 00:07:04.931 02:51:44 -- common/autotest_common.sh@940 -- # kill -0 74214 00:07:04.931 02:51:44 -- common/autotest_common.sh@941 -- # uname 00:07:04.931 02:51:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:04.931 02:51:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74214 00:07:04.931 02:51:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:04.931 02:51:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:04.931 killing process with pid 74214 00:07:04.931 02:51:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74214' 00:07:04.931 02:51:44 -- common/autotest_common.sh@955 -- # kill 74214 00:07:04.931 02:51:44 -- common/autotest_common.sh@960 -- # wait 74214 00:07:05.520 02:51:44 -- event/cpu_locks.sh@108 -- # killprocess 74228 00:07:05.520 02:51:44 -- common/autotest_common.sh@936 -- # '[' -z 74228 ']' 00:07:05.520 02:51:44 -- common/autotest_common.sh@940 -- # kill -0 74228 00:07:05.520 02:51:44 -- common/autotest_common.sh@941 -- # uname 00:07:05.520 02:51:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:05.520 02:51:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74228 00:07:05.520 killing process with pid 74228 00:07:05.520 02:51:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:05.520 02:51:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:05.521 02:51:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74228' 00:07:05.521 02:51:44 -- common/autotest_common.sh@955 -- # kill 74228 00:07:05.521 02:51:44 -- common/autotest_common.sh@960 -- # wait 74228 00:07:05.783 00:07:05.783 real 0m2.962s 00:07:05.783 user 0m3.394s 00:07:05.783 sys 0m0.894s 00:07:05.783 02:51:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:05.783 02:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.783 ************************************ 00:07:05.783 END TEST locking_app_on_unlocked_coremask 00:07:05.783 ************************************ 00:07:05.783 02:51:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:05.784 02:51:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:05.784 02:51:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:05.784 02:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.784 ************************************ 00:07:05.784 START TEST locking_app_on_locked_coremask 00:07:05.784 ************************************ 00:07:05.784 02:51:44 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:07:05.784 02:51:44 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=74289 00:07:05.784 02:51:44 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 
00:07:05.784 02:51:44 -- event/cpu_locks.sh@116 -- # waitforlisten 74289 /var/tmp/spdk.sock 00:07:05.784 02:51:44 -- common/autotest_common.sh@817 -- # '[' -z 74289 ']' 00:07:05.784 02:51:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.784 02:51:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:05.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.784 02:51:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.784 02:51:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:05.784 02:51:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.784 [2024-04-23 02:51:44.926834] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:05.784 [2024-04-23 02:51:44.926910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74289 ] 00:07:06.042 [2024-04-23 02:51:45.042742] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.042 [2024-04-23 02:51:45.058118] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.042 [2024-04-23 02:51:45.093107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.300 02:51:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:06.300 02:51:45 -- common/autotest_common.sh@850 -- # return 0 00:07:06.300 02:51:45 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=74303 00:07:06.300 02:51:45 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 74303 /var/tmp/spdk2.sock 00:07:06.300 02:51:45 -- common/autotest_common.sh@638 -- # local es=0 00:07:06.300 02:51:45 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:06.300 02:51:45 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 74303 /var/tmp/spdk2.sock 00:07:06.300 02:51:45 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:06.300 02:51:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:06.300 02:51:45 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:06.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.300 02:51:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:06.300 02:51:45 -- common/autotest_common.sh@641 -- # waitforlisten 74303 /var/tmp/spdk2.sock 00:07:06.300 02:51:45 -- common/autotest_common.sh@817 -- # '[' -z 74303 ']' 00:07:06.300 02:51:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.300 02:51:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:06.300 02:51:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.300 02:51:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:06.300 02:51:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.300 [2024-04-23 02:51:45.306796] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:07:06.300 [2024-04-23 02:51:45.306895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74303 ] 00:07:06.300 [2024-04-23 02:51:45.431526] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:06.300 [2024-04-23 02:51:45.445914] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 74289 has claimed it. 00:07:06.300 [2024-04-23 02:51:45.445994] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:07.249 ERROR: process (pid: 74303) is no longer running 00:07:07.249 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (74303) - No such process 00:07:07.249 02:51:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:07.249 02:51:46 -- common/autotest_common.sh@850 -- # return 1 00:07:07.249 02:51:46 -- common/autotest_common.sh@641 -- # es=1 00:07:07.249 02:51:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:07.249 02:51:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:07.249 02:51:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:07.249 02:51:46 -- event/cpu_locks.sh@122 -- # locks_exist 74289 00:07:07.249 02:51:46 -- event/cpu_locks.sh@22 -- # lslocks -p 74289 00:07:07.249 02:51:46 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.508 02:51:46 -- event/cpu_locks.sh@124 -- # killprocess 74289 00:07:07.508 02:51:46 -- common/autotest_common.sh@936 -- # '[' -z 74289 ']' 00:07:07.508 02:51:46 -- common/autotest_common.sh@940 -- # kill -0 74289 00:07:07.508 02:51:46 -- common/autotest_common.sh@941 -- # uname 00:07:07.508 02:51:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:07.508 02:51:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74289 00:07:07.508 killing process with pid 74289 00:07:07.508 02:51:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:07.508 02:51:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:07.508 02:51:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74289' 00:07:07.508 02:51:46 -- common/autotest_common.sh@955 -- # kill 74289 00:07:07.508 02:51:46 -- common/autotest_common.sh@960 -- # wait 74289 00:07:07.767 00:07:07.767 real 0m1.872s 00:07:07.767 user 0m2.251s 00:07:07.767 sys 0m0.478s 00:07:07.767 ************************************ 00:07:07.767 END TEST locking_app_on_locked_coremask 00:07:07.767 ************************************ 00:07:07.767 02:51:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:07.767 02:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.767 02:51:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:07.767 02:51:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:07.767 02:51:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.767 02:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:07.767 ************************************ 00:07:07.767 START TEST locking_overlapped_coremask 00:07:07.767 ************************************ 00:07:07.767 02:51:46 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:07:07.767 02:51:46 -- event/cpu_locks.sh@131 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:07.767 02:51:46 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=74347 00:07:07.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.767 02:51:46 -- event/cpu_locks.sh@133 -- # waitforlisten 74347 /var/tmp/spdk.sock 00:07:07.767 02:51:46 -- common/autotest_common.sh@817 -- # '[' -z 74347 ']' 00:07:07.767 02:51:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.767 02:51:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:07.767 02:51:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.767 02:51:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:07.767 02:51:46 -- common/autotest_common.sh@10 -- # set +x 00:07:08.026 [2024-04-23 02:51:46.924693] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:08.026 [2024-04-23 02:51:46.925360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74347 ] 00:07:08.026 [2024-04-23 02:51:47.054322] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:08.026 [2024-04-23 02:51:47.072178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.026 [2024-04-23 02:51:47.107036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.026 [2024-04-23 02:51:47.107076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.026 [2024-04-23 02:51:47.107080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.962 02:51:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:08.962 02:51:47 -- common/autotest_common.sh@850 -- # return 0 00:07:08.962 02:51:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=74365 00:07:08.962 02:51:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 74365 /var/tmp/spdk2.sock 00:07:08.962 02:51:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:08.962 02:51:47 -- common/autotest_common.sh@638 -- # local es=0 00:07:08.962 02:51:47 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 74365 /var/tmp/spdk2.sock 00:07:08.962 02:51:47 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:07:08.962 02:51:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.962 02:51:47 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:07:08.962 02:51:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:08.962 02:51:47 -- common/autotest_common.sh@641 -- # waitforlisten 74365 /var/tmp/spdk2.sock 00:07:08.962 02:51:47 -- common/autotest_common.sh@817 -- # '[' -z 74365 ']' 00:07:08.962 02:51:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:08.962 02:51:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:08.962 02:51:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:08.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
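The overlap the next claim check trips on can be read straight off the two core masks: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so core 2 is requested twice. A quick shell check (our illustration, not part of cpu_locks.sh) makes the collision concrete:

    $ printf '0x%x\n' $(( 0x7 & 0x1c ))    # AND of the two masks
    0x4                                    # bit 2 set, i.e. core 2, matching the
                                           # "Cannot create lock on core 2" error below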
00:07:08.962 02:51:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:08.962 02:51:47 -- common/autotest_common.sh@10 -- # set +x 00:07:08.962 [2024-04-23 02:51:47.882076] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:08.962 [2024-04-23 02:51:47.882856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74365 ] 00:07:08.962 [2024-04-23 02:51:48.006897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:08.962 [2024-04-23 02:51:48.023823] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74347 has claimed it. 00:07:08.962 [2024-04-23 02:51:48.023905] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:09.530 ERROR: process (pid: 74365) is no longer running 00:07:09.530 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (74365) - No such process 00:07:09.530 02:51:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:09.530 02:51:48 -- common/autotest_common.sh@850 -- # return 1 00:07:09.530 02:51:48 -- common/autotest_common.sh@641 -- # es=1 00:07:09.530 02:51:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:09.530 02:51:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:09.530 02:51:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:09.530 02:51:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:09.530 02:51:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:09.530 02:51:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:09.530 02:51:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:09.530 02:51:48 -- event/cpu_locks.sh@141 -- # killprocess 74347 00:07:09.530 02:51:48 -- common/autotest_common.sh@936 -- # '[' -z 74347 ']' 00:07:09.530 02:51:48 -- common/autotest_common.sh@940 -- # kill -0 74347 00:07:09.530 02:51:48 -- common/autotest_common.sh@941 -- # uname 00:07:09.530 02:51:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.530 02:51:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74347 00:07:09.530 killing process with pid 74347 00:07:09.530 02:51:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:09.530 02:51:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:09.530 02:51:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74347' 00:07:09.530 02:51:48 -- common/autotest_common.sh@955 -- # kill 74347 00:07:09.530 02:51:48 -- common/autotest_common.sh@960 -- # wait 74347 00:07:09.789 00:07:09.790 real 0m1.954s 00:07:09.790 user 0m5.542s 00:07:09.790 sys 0m0.338s 00:07:09.790 02:51:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.790 02:51:48 -- common/autotest_common.sh@10 -- # set +x 00:07:09.790 ************************************ 00:07:09.790 END TEST locking_overlapped_coremask 00:07:09.790 ************************************ 00:07:09.790 02:51:48 -- event/cpu_locks.sh@172 -- # 
run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.790 02:51:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:09.790 02:51:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.790 02:51:48 -- common/autotest_common.sh@10 -- # set +x 00:07:09.790 ************************************ 00:07:09.790 START TEST locking_overlapped_coremask_via_rpc 00:07:09.790 ************************************ 00:07:09.790 02:51:48 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:07:09.790 02:51:48 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=74409 00:07:09.790 02:51:48 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.790 02:51:48 -- event/cpu_locks.sh@149 -- # waitforlisten 74409 /var/tmp/spdk.sock 00:07:09.790 02:51:48 -- common/autotest_common.sh@817 -- # '[' -z 74409 ']' 00:07:09.790 02:51:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.790 02:51:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:09.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.790 02:51:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.790 02:51:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:09.790 02:51:48 -- common/autotest_common.sh@10 -- # set +x 00:07:10.049 [2024-04-23 02:51:48.976359] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:10.049 [2024-04-23 02:51:48.976449] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74409 ] 00:07:10.049 [2024-04-23 02:51:49.094182] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.049 [2024-04-23 02:51:49.109734] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.049 [2024-04-23 02:51:49.109906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.049 [2024-04-23 02:51:49.142750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.049 [2024-04-23 02:51:49.142861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.049 [2024-04-23 02:51:49.142866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.307 02:51:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:10.307 02:51:49 -- common/autotest_common.sh@850 -- # return 0 00:07:10.307 02:51:49 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=74421 00:07:10.307 02:51:49 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.307 02:51:49 -- event/cpu_locks.sh@153 -- # waitforlisten 74421 /var/tmp/spdk2.sock 00:07:10.307 02:51:49 -- common/autotest_common.sh@817 -- # '[' -z 74421 ']' 00:07:10.307 02:51:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.307 02:51:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:10.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
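Condensed from the traces, the via_rpc variant works like this: both targets come up with --disable-cpumask-locks so the overlapping masks can coexist, then locking is switched back on per process through its RPC socket, and only the second enable is expected to fail. A sketch of that flow, with paths and flags taken from the log; the rpc.py calls are our assumption about how rpc_cmd reaches each socket:

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks                         # first target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # should fail: core 2 already locked

The JSON-RPC exchange that follows shows exactly that failure.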
00:07:10.307 02:51:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.307 02:51:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:10.307 02:51:49 -- common/autotest_common.sh@10 -- # set +x 00:07:10.307 [2024-04-23 02:51:49.365875] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:10.307 [2024-04-23 02:51:49.366198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74421 ] 00:07:10.565 [2024-04-23 02:51:49.490903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:10.565 [2024-04-23 02:51:49.512551] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:10.565 [2024-04-23 02:51:49.512596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.565 [2024-04-23 02:51:49.577046] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.565 [2024-04-23 02:51:49.580262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.565 [2024-04-23 02:51:49.580261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.501 02:51:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.501 02:51:50 -- common/autotest_common.sh@850 -- # return 0 00:07:11.501 02:51:50 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.501 02:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.501 02:51:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.501 02:51:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:11.501 02:51:50 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.501 02:51:50 -- common/autotest_common.sh@638 -- # local es=0 00:07:11.501 02:51:50 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.501 02:51:50 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:07:11.501 02:51:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.501 02:51:50 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:07:11.501 02:51:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:11.501 02:51:50 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.501 02:51:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:11.501 02:51:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.501 [2024-04-23 02:51:50.358284] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 74409 has claimed it. 00:07:11.501 request: 00:07:11.501 { 00:07:11.501 "method": "framework_enable_cpumask_locks", 00:07:11.501 "req_id": 1 00:07:11.501 } 00:07:11.501 Got JSON-RPC error response 00:07:11.501 response: 00:07:11.502 { 00:07:11.502 "code": -32603, 00:07:11.502 "message": "Failed to claim CPU core: 2" 00:07:11.502 } 00:07:11.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:11.502 02:51:50 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:07:11.502 02:51:50 -- common/autotest_common.sh@641 -- # es=1 00:07:11.502 02:51:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:11.502 02:51:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:11.502 02:51:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:11.502 02:51:50 -- event/cpu_locks.sh@158 -- # waitforlisten 74409 /var/tmp/spdk.sock 00:07:11.502 02:51:50 -- common/autotest_common.sh@817 -- # '[' -z 74409 ']' 00:07:11.502 02:51:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.502 02:51:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.502 02:51:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.502 02:51:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.502 02:51:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.502 02:51:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.502 02:51:50 -- common/autotest_common.sh@850 -- # return 0 00:07:11.502 02:51:50 -- event/cpu_locks.sh@159 -- # waitforlisten 74421 /var/tmp/spdk2.sock 00:07:11.502 02:51:50 -- common/autotest_common.sh@817 -- # '[' -z 74421 ']' 00:07:11.502 02:51:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.502 02:51:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:11.502 02:51:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.502 02:51:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:11.502 02:51:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.761 ************************************ 00:07:11.761 END TEST locking_overlapped_coremask_via_rpc 00:07:11.761 ************************************ 00:07:11.761 02:51:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:11.761 02:51:50 -- common/autotest_common.sh@850 -- # return 0 00:07:11.761 02:51:50 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:11.761 02:51:50 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.761 02:51:50 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.761 02:51:50 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.761 00:07:11.761 real 0m1.961s 00:07:11.761 user 0m1.146s 00:07:11.761 sys 0m0.171s 00:07:11.761 02:51:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:11.761 02:51:50 -- common/autotest_common.sh@10 -- # set +x 00:07:12.020 02:51:50 -- event/cpu_locks.sh@174 -- # cleanup 00:07:12.020 02:51:50 -- event/cpu_locks.sh@15 -- # [[ -z 74409 ]] 00:07:12.020 02:51:50 -- event/cpu_locks.sh@15 -- # killprocess 74409 00:07:12.020 02:51:50 -- common/autotest_common.sh@936 -- # '[' -z 74409 ']' 00:07:12.020 02:51:50 -- common/autotest_common.sh@940 -- # kill -0 74409 00:07:12.020 02:51:50 -- common/autotest_common.sh@941 -- # uname 00:07:12.020 02:51:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.020 02:51:50 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 74409 00:07:12.020 killing process with pid 74409 00:07:12.020 02:51:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:12.020 02:51:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:12.020 02:51:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74409' 00:07:12.020 02:51:50 -- common/autotest_common.sh@955 -- # kill 74409 00:07:12.020 02:51:50 -- common/autotest_common.sh@960 -- # wait 74409 00:07:12.278 02:51:51 -- event/cpu_locks.sh@16 -- # [[ -z 74421 ]] 00:07:12.278 02:51:51 -- event/cpu_locks.sh@16 -- # killprocess 74421 00:07:12.278 02:51:51 -- common/autotest_common.sh@936 -- # '[' -z 74421 ']' 00:07:12.278 02:51:51 -- common/autotest_common.sh@940 -- # kill -0 74421 00:07:12.278 02:51:51 -- common/autotest_common.sh@941 -- # uname 00:07:12.278 02:51:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:12.278 02:51:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74421 00:07:12.278 killing process with pid 74421 00:07:12.278 02:51:51 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:12.278 02:51:51 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:12.278 02:51:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74421' 00:07:12.278 02:51:51 -- common/autotest_common.sh@955 -- # kill 74421 00:07:12.278 02:51:51 -- common/autotest_common.sh@960 -- # wait 74421 00:07:12.541 02:51:51 -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.541 Process with pid 74409 is not found 00:07:12.541 02:51:51 -- event/cpu_locks.sh@1 -- # cleanup 00:07:12.541 02:51:51 -- event/cpu_locks.sh@15 -- # [[ -z 74409 ]] 00:07:12.541 02:51:51 -- event/cpu_locks.sh@15 -- # killprocess 74409 00:07:12.541 02:51:51 -- common/autotest_common.sh@936 -- # '[' -z 74409 ']' 00:07:12.541 02:51:51 -- common/autotest_common.sh@940 -- # kill -0 74409 00:07:12.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (74409) - No such process 00:07:12.541 02:51:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 74409 is not found' 00:07:12.541 02:51:51 -- event/cpu_locks.sh@16 -- # [[ -z 74421 ]] 00:07:12.541 02:51:51 -- event/cpu_locks.sh@16 -- # killprocess 74421 00:07:12.541 02:51:51 -- common/autotest_common.sh@936 -- # '[' -z 74421 ']' 00:07:12.541 02:51:51 -- common/autotest_common.sh@940 -- # kill -0 74421 00:07:12.541 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (74421) - No such process 00:07:12.541 Process with pid 74421 is not found 00:07:12.541 02:51:51 -- common/autotest_common.sh@963 -- # echo 'Process with pid 74421 is not found' 00:07:12.541 02:51:51 -- event/cpu_locks.sh@18 -- # rm -f 00:07:12.541 ************************************ 00:07:12.541 END TEST cpu_locks 00:07:12.541 ************************************ 00:07:12.541 00:07:12.541 real 0m16.398s 00:07:12.541 user 0m29.384s 00:07:12.541 sys 0m4.424s 00:07:12.541 02:51:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.541 02:51:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.541 ************************************ 00:07:12.541 END TEST event 00:07:12.541 ************************************ 00:07:12.541 00:07:12.541 real 0m42.485s 00:07:12.541 user 1m22.073s 00:07:12.541 sys 0m7.776s 00:07:12.541 02:51:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:12.541 02:51:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.541 02:51:51 -- spdk/autotest.sh@178 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:12.541 02:51:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.541 02:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.541 02:51:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.541 ************************************ 00:07:12.541 START TEST thread 00:07:12.541 ************************************ 00:07:12.541 02:51:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:12.541 * Looking for test storage... 00:07:12.541 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:12.541 02:51:51 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:12.541 02:51:51 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:12.541 02:51:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.541 02:51:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.809 ************************************ 00:07:12.809 START TEST thread_poller_perf 00:07:12.809 ************************************ 00:07:12.809 02:51:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:12.809 [2024-04-23 02:51:51.767210] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:12.809 [2024-04-23 02:51:51.767289] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74552 ] 00:07:12.809 [2024-04-23 02:51:51.882583] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:12.809 [2024-04-23 02:51:51.901779] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.809 [2024-04-23 02:51:51.931048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.809 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:14.184 ====================================== 00:07:14.184 busy:2207886062 (cyc) 00:07:14.184 total_run_count: 381000 00:07:14.184 tsc_hz: 2200000000 (cyc) 00:07:14.184 ====================================== 00:07:14.184 poller_cost: 5794 (cyc), 2633 (nsec) 00:07:14.184 00:07:14.184 real 0m1.233s 00:07:14.184 user 0m1.095s 00:07:14.184 sys 0m0.032s 00:07:14.184 02:51:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:14.184 02:51:52 -- common/autotest_common.sh@10 -- # set +x 00:07:14.184 ************************************ 00:07:14.184 END TEST thread_poller_perf 00:07:14.184 ************************************ 00:07:14.184 02:51:53 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.184 02:51:53 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:14.184 02:51:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.184 02:51:53 -- common/autotest_common.sh@10 -- # set +x 00:07:14.184 ************************************ 00:07:14.184 START TEST thread_poller_perf 00:07:14.184 ************************************ 00:07:14.184 02:51:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:14.184 [2024-04-23 02:51:53.125879] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:14.184 [2024-04-23 02:51:53.126093] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74586 ] 00:07:14.184 [2024-04-23 02:51:53.241875] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:14.184 [2024-04-23 02:51:53.258981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.184 [2024-04-23 02:51:53.293541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.184 Running 1000 pollers for 1 seconds with 0 microseconds period. 
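For the first pass above, poller_cost is evidently busy cycles divided by total_run_count, converted to nanoseconds at the reported tsc_hz; bash integer arithmetic reproduces the printed values exactly:

    $ echo $(( 2207886062 / 381000 ))    # busy cycles per poller invocation
    5794
    $ echo $(( 5794 * 1000 / 2200 ))     # cycles to nsec at the 2.2 GHz TSC
    2633

The second pass, just launched with a 0 microsecond period (-l 0), should therefore post a much higher total_run_count and a lower per-poll cost, which the results below bear out.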
00:07:15.560 ====================================== 00:07:15.560 busy:2201974250 (cyc) 00:07:15.560 total_run_count: 4813000 00:07:15.560 tsc_hz: 2200000000 (cyc) 00:07:15.560 ====================================== 00:07:15.560 poller_cost: 457 (cyc), 207 (nsec) 00:07:15.560 00:07:15.560 real 0m1.238s 00:07:15.560 user 0m1.099s 00:07:15.560 sys 0m0.031s 00:07:15.560 02:51:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.560 02:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.560 ************************************ 00:07:15.560 END TEST thread_poller_perf 00:07:15.561 ************************************ 00:07:15.561 02:51:54 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:15.561 00:07:15.561 real 0m2.791s 00:07:15.561 user 0m2.308s 00:07:15.561 sys 0m0.229s 00:07:15.561 02:51:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:15.561 02:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.561 ************************************ 00:07:15.561 END TEST thread 00:07:15.561 ************************************ 00:07:15.561 02:51:54 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:15.561 02:51:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:15.561 02:51:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:15.561 02:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.561 ************************************ 00:07:15.561 START TEST accel 00:07:15.561 ************************************ 00:07:15.561 02:51:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:07:15.561 * Looking for test storage... 00:07:15.561 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:15.561 02:51:54 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:15.561 02:51:54 -- accel/accel.sh@82 -- # get_expected_opcs 00:07:15.561 02:51:54 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:15.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.561 02:51:54 -- accel/accel.sh@62 -- # spdk_tgt_pid=74671 00:07:15.561 02:51:54 -- accel/accel.sh@63 -- # waitforlisten 74671 00:07:15.561 02:51:54 -- common/autotest_common.sh@817 -- # '[' -z 74671 ']' 00:07:15.561 02:51:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.561 02:51:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:15.561 02:51:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.561 02:51:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:15.561 02:51:54 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:15.561 02:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.561 02:51:54 -- accel/accel.sh@61 -- # build_accel_config 00:07:15.561 02:51:54 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:15.561 02:51:54 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:15.561 02:51:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.561 02:51:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.561 02:51:54 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:15.561 02:51:54 -- accel/accel.sh@40 -- # local IFS=, 00:07:15.561 02:51:54 -- accel/accel.sh@41 -- # jq -r . 00:07:15.561 [2024-04-23 02:51:54.621025] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:07:15.561 [2024-04-23 02:51:54.621110] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74671 ] 00:07:15.820 [2024-04-23 02:51:54.737439] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:15.820 [2024-04-23 02:51:54.752390] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.820 [2024-04-23 02:51:54.785268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.820 02:51:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:15.820 02:51:54 -- common/autotest_common.sh@850 -- # return 0 00:07:15.820 02:51:54 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:15.820 02:51:54 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:15.820 02:51:54 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:15.820 02:51:54 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:15.820 02:51:54 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:15.820 02:51:54 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:15.820 02:51:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:15.820 02:51:54 -- common/autotest_common.sh@10 -- # set +x 00:07:15.820 02:51:54 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:15.820 02:51:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 
-- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # IFS== 00:07:16.079 02:51:54 -- accel/accel.sh@72 -- # read -r opc module 00:07:16.079 02:51:54 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:16.079 02:51:54 -- accel/accel.sh@75 -- # killprocess 74671 00:07:16.079 02:51:54 -- common/autotest_common.sh@936 -- # '[' -z 74671 ']' 00:07:16.079 02:51:54 -- common/autotest_common.sh@940 -- # kill -0 74671 00:07:16.079 02:51:54 -- common/autotest_common.sh@941 -- # uname 00:07:16.079 02:51:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:16.079 02:51:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74671 00:07:16.079 killing process with pid 74671 00:07:16.079 02:51:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:16.079 02:51:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:16.079 02:51:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74671' 00:07:16.079 02:51:55 -- common/autotest_common.sh@955 -- # kill 74671 00:07:16.079 02:51:55 -- common/autotest_common.sh@960 -- # wait 74671 00:07:16.079 02:51:55 -- accel/accel.sh@76 -- # trap - ERR 00:07:16.079 02:51:55 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:16.079 02:51:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:16.079 02:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.079 02:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.338 02:51:55 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:07:16.338 02:51:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:16.338 02:51:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.338 02:51:55 -- 
accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.338 02:51:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.338 02:51:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.338 02:51:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.338 02:51:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.338 02:51:55 -- accel/accel.sh@40 -- # local IFS=, 00:07:16.338 02:51:55 -- accel/accel.sh@41 -- # jq -r . 00:07:16.338 02:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.338 02:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.338 02:51:55 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:16.338 02:51:55 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:16.338 02:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.338 02:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.338 ************************************ 00:07:16.338 START TEST accel_missing_filename 00:07:16.338 ************************************ 00:07:16.338 02:51:55 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:07:16.338 02:51:55 -- common/autotest_common.sh@638 -- # local es=0 00:07:16.338 02:51:55 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:16.338 02:51:55 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:16.338 02:51:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.338 02:51:55 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:16.338 02:51:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.338 02:51:55 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:07:16.338 02:51:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.338 02:51:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:16.338 02:51:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.338 02:51:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.338 02:51:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.338 02:51:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.338 02:51:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.338 02:51:55 -- accel/accel.sh@40 -- # local IFS=, 00:07:16.338 02:51:55 -- accel/accel.sh@41 -- # jq -r . 00:07:16.338 [2024-04-23 02:51:55.481067] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:16.338 [2024-04-23 02:51:55.481169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74718 ] 00:07:16.597 [2024-04-23 02:51:55.600609] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:16.597 [2024-04-23 02:51:55.616928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.597 [2024-04-23 02:51:55.648942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.597 [2024-04-23 02:51:55.677299] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:16.597 [2024-04-23 02:51:55.714546] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:16.856 A filename is required. 
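That failure is the point of the test: per the accel_perf usage text printed later in this log, compress reads its uncompressed input from the file named with -l, and the run above deliberately omits it. The corrected invocation, using the input file the next test passes, would look like:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib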
00:07:16.856 ************************************ 00:07:16.856 END TEST accel_missing_filename 00:07:16.856 ************************************ 00:07:16.856 02:51:55 -- common/autotest_common.sh@641 -- # es=234 00:07:16.856 02:51:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:16.856 02:51:55 -- common/autotest_common.sh@650 -- # es=106 00:07:16.856 02:51:55 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:16.856 02:51:55 -- common/autotest_common.sh@658 -- # es=1 00:07:16.856 02:51:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:16.856 00:07:16.856 real 0m0.311s 00:07:16.856 user 0m0.185s 00:07:16.857 sys 0m0.072s 00:07:16.857 02:51:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:16.857 02:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.857 02:51:55 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.857 02:51:55 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:16.857 02:51:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.857 02:51:55 -- common/autotest_common.sh@10 -- # set +x 00:07:16.857 ************************************ 00:07:16.857 START TEST accel_compress_verify 00:07:16.857 ************************************ 00:07:16.857 02:51:55 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.857 02:51:55 -- common/autotest_common.sh@638 -- # local es=0 00:07:16.857 02:51:55 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.857 02:51:55 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:16.857 02:51:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.857 02:51:55 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:16.857 02:51:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:16.857 02:51:55 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.857 02:51:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.857 02:51:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:16.857 02:51:55 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:16.857 02:51:55 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:16.857 02:51:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.857 02:51:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.857 02:51:55 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:16.857 02:51:55 -- accel/accel.sh@40 -- # local IFS=, 00:07:16.857 02:51:55 -- accel/accel.sh@41 -- # jq -r . 00:07:16.857 [2024-04-23 02:51:55.907911] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:16.857 [2024-04-23 02:51:55.907987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74746 ] 00:07:17.116 [2024-04-23 02:51:56.027910] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:17.116 [2024-04-23 02:51:56.046667] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.116 [2024-04-23 02:51:56.077659] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.116 [2024-04-23 02:51:56.105549] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:17.116 [2024-04-23 02:51:56.143268] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:07:17.116 00:07:17.116 Compression does not support the verify option, aborting. 00:07:17.116 02:51:56 -- common/autotest_common.sh@641 -- # es=161 00:07:17.116 02:51:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:17.116 02:51:56 -- common/autotest_common.sh@650 -- # es=33 00:07:17.116 ************************************ 00:07:17.116 END TEST accel_compress_verify 00:07:17.116 ************************************ 00:07:17.116 02:51:56 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:17.116 02:51:56 -- common/autotest_common.sh@658 -- # es=1 00:07:17.116 02:51:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:17.116 00:07:17.116 real 0m0.312s 00:07:17.116 user 0m0.185s 00:07:17.116 sys 0m0.071s 00:07:17.116 02:51:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.116 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.116 02:51:56 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:17.116 02:51:56 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:17.116 02:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.116 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.376 ************************************ 00:07:17.376 START TEST accel_wrong_workload 00:07:17.376 ************************************ 00:07:17.376 02:51:56 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:07:17.376 02:51:56 -- common/autotest_common.sh@638 -- # local es=0 00:07:17.376 02:51:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:17.376 02:51:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:17.376 02:51:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.376 02:51:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:17.376 02:51:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.376 02:51:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:07:17.376 02:51:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:17.376 02:51:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.376 02:51:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.376 02:51:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.376 02:51:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.376 02:51:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.376 02:51:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.376 02:51:56 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.376 02:51:56 -- accel/accel.sh@41 -- # jq -r . 
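Each of these negative accel tests leans on the harness's NOT wrapper. Reconstructed from the es= traces in this log (es=234 -> 106 -> 1 for the missing filename, es=161 -> 33 -> 1 for compress -y) rather than quoted from autotest_common.sh, it behaves roughly like:

    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then       # command was ended by a signal
            es=$(( es & ~128 ))       # strip the signal bit: 234 -> 106, 161 -> 33
            es=1                      # still treated as the expected failure
        fi
        (( !es == 0 ))                # invert: NOT succeeds only if "$@" failed
    }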
00:07:17.376 Unsupported workload type: foobar 00:07:17.376 [2024-04-23 02:51:56.331893] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:17.376 accel_perf options: 00:07:17.376 [-h help message] 00:07:17.376 [-q queue depth per core] 00:07:17.376 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:17.376 [-T number of threads per core 00:07:17.376 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:17.376 [-t time in seconds] 00:07:17.376 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:17.376 [ dif_verify, , dif_generate, dif_generate_copy 00:07:17.376 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:17.376 [-l for compress/decompress workloads, name of uncompressed input file 00:07:17.376 [-S for crc32c workload, use this seed value (default 0) 00:07:17.376 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:17.376 [-f for fill workload, use this BYTE value (default 255) 00:07:17.376 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:17.376 [-y verify result if this switch is on] 00:07:17.376 [-a tasks to allocate per core (default: same value as -q)] 00:07:17.376 Can be used to spread operations across a wider range of memory. 00:07:17.376 02:51:56 -- common/autotest_common.sh@641 -- # es=1 00:07:17.376 02:51:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:17.376 02:51:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:17.376 02:51:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:17.376 00:07:17.376 real 0m0.028s 00:07:17.376 user 0m0.014s 00:07:17.376 sys 0m0.013s 00:07:17.376 ************************************ 00:07:17.376 END TEST accel_wrong_workload 00:07:17.376 ************************************ 00:07:17.376 02:51:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.376 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.376 02:51:56 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:17.376 02:51:56 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:17.376 02:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.376 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.376 ************************************ 00:07:17.376 START TEST accel_negative_buffers 00:07:17.376 ************************************ 00:07:17.376 02:51:56 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:17.376 02:51:56 -- common/autotest_common.sh@638 -- # local es=0 00:07:17.377 02:51:56 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:17.377 02:51:56 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:07:17.377 02:51:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.377 02:51:56 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:07:17.377 02:51:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:17.377 02:51:56 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:07:17.377 02:51:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:17.377 02:51:56 -- accel/accel.sh@12 -- # 
build_accel_config 00:07:17.377 02:51:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.377 02:51:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.377 02:51:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.377 02:51:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.377 02:51:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.377 02:51:56 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.377 02:51:56 -- accel/accel.sh@41 -- # jq -r . 00:07:17.377 -x option must be non-negative. 00:07:17.377 [2024-04-23 02:51:56.467276] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:17.377 accel_perf options: 00:07:17.377 [-h help message] 00:07:17.377 [-q queue depth per core] 00:07:17.377 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:17.377 [-T number of threads per core 00:07:17.377 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:17.377 [-t time in seconds] 00:07:17.377 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:17.377 [ dif_verify, , dif_generate, dif_generate_copy 00:07:17.377 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:17.377 [-l for compress/decompress workloads, name of uncompressed input file 00:07:17.377 [-S for crc32c workload, use this seed value (default 0) 00:07:17.377 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:17.377 [-f for fill workload, use this BYTE value (default 255) 00:07:17.377 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:17.377 [-y verify result if this switch is on] 00:07:17.377 [-a tasks to allocate per core (default: same value as -q)] 00:07:17.377 Can be used to spread operations across a wider range of memory. 
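Per the usage text just printed, xor takes -x source buffers with a minimum of 2, so the -1 passed above is rejected before any work is queued. A passing counterpart to the failing run would be:

    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2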
00:07:17.377 02:51:56 -- common/autotest_common.sh@641 -- # es=1 00:07:17.377 02:51:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:17.377 02:51:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:17.377 02:51:56 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:17.377 00:07:17.377 real 0m0.027s 00:07:17.377 user 0m0.014s 00:07:17.377 sys 0m0.012s 00:07:17.377 02:51:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:17.377 ************************************ 00:07:17.377 END TEST accel_negative_buffers 00:07:17.377 ************************************ 00:07:17.377 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.377 02:51:56 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:17.377 02:51:56 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:17.377 02:51:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:17.377 02:51:56 -- common/autotest_common.sh@10 -- # set +x 00:07:17.636 ************************************ 00:07:17.636 START TEST accel_crc32c 00:07:17.636 ************************************ 00:07:17.636 02:51:56 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:17.636 02:51:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.636 02:51:56 -- accel/accel.sh@17 -- # local accel_module 00:07:17.636 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.636 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.636 02:51:56 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:17.636 02:51:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:17.636 02:51:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.636 02:51:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.636 02:51:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:17.636 02:51:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.636 02:51:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.636 02:51:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.636 02:51:56 -- accel/accel.sh@40 -- # local IFS=, 00:07:17.636 02:51:56 -- accel/accel.sh@41 -- # jq -r . 00:07:17.636 [2024-04-23 02:51:56.611933] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:17.636 [2024-04-23 02:51:56.611989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74819 ] 00:07:17.636 [2024-04-23 02:51:56.728213] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:17.636 [2024-04-23 02:51:56.745358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.636 [2024-04-23 02:51:56.778590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val=0x1 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val=crc32c 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.895 02:51:56 -- accel/accel.sh@20 -- # val=32 00:07:17.895 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.895 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val=software 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@22 -- # accel_module=software 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val=32 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val=32 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val=1 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 
00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val=Yes 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:17.896 02:51:56 -- accel/accel.sh@20 -- # val= 00:07:17.896 02:51:56 -- accel/accel.sh@21 -- # case "$var" in 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # IFS=: 00:07:17.896 02:51:56 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@20 -- # val= 00:07:18.833 02:51:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@20 -- # val= 00:07:18.833 02:51:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@20 -- # val= 00:07:18.833 02:51:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@20 -- # val= 00:07:18.833 ************************************ 00:07:18.833 END TEST accel_crc32c 00:07:18.833 ************************************ 00:07:18.833 02:51:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@20 -- # val= 00:07:18.833 02:51:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@20 -- # val= 00:07:18.833 02:51:57 -- accel/accel.sh@21 -- # case "$var" in 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # IFS=: 00:07:18.833 02:51:57 -- accel/accel.sh@19 -- # read -r var val 00:07:18.833 02:51:57 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:18.833 02:51:57 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:18.833 02:51:57 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:18.833 00:07:18.833 real 0m1.310s 00:07:18.833 user 0m1.154s 00:07:18.833 sys 0m0.064s 00:07:18.833 02:51:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:18.833 02:51:57 -- common/autotest_common.sh@10 -- # set +x 00:07:18.833 02:51:57 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:18.833 02:51:57 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:18.833 02:51:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:18.833 02:51:57 -- common/autotest_common.sh@10 -- # set +x 00:07:19.093 ************************************ 00:07:19.093 START TEST accel_crc32c_C2 00:07:19.093 ************************************ 00:07:19.093 02:51:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:19.093 02:51:58 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.093 02:51:58 -- accel/accel.sh@17 -- # local accel_module 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # 
IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:19.093 02:51:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.093 02:51:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.093 02:51:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:19.093 02:51:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.093 02:51:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.093 02:51:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.093 02:51:58 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.093 02:51:58 -- accel/accel.sh@41 -- # jq -r . 00:07:19.093 [2024-04-23 02:51:58.038609] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:19.093 [2024-04-23 02:51:58.038742] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74857 ] 00:07:19.093 [2024-04-23 02:51:58.160468] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:19.093 [2024-04-23 02:51:58.178503] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.093 [2024-04-23 02:51:58.209808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val=0x1 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val=crc32c 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val=0 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var 
val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.093 02:51:58 -- accel/accel.sh@20 -- # val=software 00:07:19.093 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.093 02:51:58 -- accel/accel.sh@22 -- # accel_module=software 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.093 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val=32 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val=32 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val=1 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val=Yes 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:19.352 02:51:58 -- accel/accel.sh@20 -- # val= 00:07:19.352 02:51:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # IFS=: 00:07:19.352 02:51:58 -- accel/accel.sh@19 -- # read -r var val 00:07:20.287 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.287 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.287 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.287 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.287 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.287 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.287 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.287 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.287 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.287 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.287 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.288 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.288 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.288 02:51:59 -- accel/accel.sh@19 -- # IFS=: 
00:07:20.288 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.288 02:51:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:20.288 02:51:59 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:20.288 02:51:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.288 00:07:20.288 real 0m1.317s 00:07:20.288 user 0m1.147s 00:07:20.288 sys 0m0.078s 00:07:20.288 02:51:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:20.288 02:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:20.288 ************************************ 00:07:20.288 END TEST accel_crc32c_C2 00:07:20.288 ************************************ 00:07:20.288 02:51:59 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:20.288 02:51:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:20.288 02:51:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:20.288 02:51:59 -- common/autotest_common.sh@10 -- # set +x 00:07:20.288 ************************************ 00:07:20.288 START TEST accel_copy 00:07:20.288 ************************************ 00:07:20.288 02:51:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:07:20.288 02:51:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.288 02:51:59 -- accel/accel.sh@17 -- # local accel_module 00:07:20.288 02:51:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:20.288 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.288 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.288 02:51:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:20.288 02:51:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.288 02:51:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:20.288 02:51:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:20.288 02:51:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.288 02:51:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.288 02:51:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:20.288 02:51:59 -- accel/accel.sh@40 -- # local IFS=, 00:07:20.288 02:51:59 -- accel/accel.sh@41 -- # jq -r . 00:07:20.547 [2024-04-23 02:51:59.451233] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:20.547 [2024-04-23 02:51:59.451304] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74897 ] 00:07:20.547 [2024-04-23 02:51:59.565853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
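The copy test now starting pushes 4096-byte buffers through the software module for one second. A minimal standalone sketch of the same workload (binary path and flags as they appear elsewhere in this log):

    # plain memory copy, default 4 KiB transfer size, verify on
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy -y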
00:07:20.547 [2024-04-23 02:51:59.580981] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.547 [2024-04-23 02:51:59.611360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val=0x1 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val=copy 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val=software 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@22 -- # accel_module=software 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val=32 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val=32 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val=1 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.547 02:51:59 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:20.547 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.547 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.548 02:51:59 -- accel/accel.sh@20 -- # val=Yes 00:07:20.548 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 
00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.548 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.548 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:20.548 02:51:59 -- accel/accel.sh@20 -- # val= 00:07:20.548 02:51:59 -- accel/accel.sh@21 -- # case "$var" in 00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # IFS=: 00:07:20.548 02:51:59 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:00 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:21.924 02:52:00 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:21.924 02:52:00 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:21.924 00:07:21.924 real 0m1.298s 00:07:21.924 user 0m1.145s 00:07:21.924 sys 0m0.063s 00:07:21.924 02:52:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:21.924 ************************************ 00:07:21.924 END TEST accel_copy 00:07:21.924 ************************************ 00:07:21.924 02:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:21.924 02:52:00 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.924 02:52:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:21.924 02:52:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.924 02:52:00 -- common/autotest_common.sh@10 -- # set +x 00:07:21.924 ************************************ 00:07:21.924 START TEST accel_fill 00:07:21.924 ************************************ 00:07:21.924 02:52:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.924 02:52:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:21.924 02:52:00 -- accel/accel.sh@17 -- # local accel_module 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.924 02:52:00 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:00 -- accel/accel.sh@12 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:21.924 02:52:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.924 02:52:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.924 02:52:00 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.924 02:52:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.924 02:52:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.924 02:52:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.924 02:52:00 -- accel/accel.sh@40 -- # local IFS=, 00:07:21.924 02:52:00 -- accel/accel.sh@41 -- # jq -r . 00:07:21.924 [2024-04-23 02:52:00.860770] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:21.924 [2024-04-23 02:52:00.860845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74935 ] 00:07:21.924 [2024-04-23 02:52:00.980131] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:21.924 [2024-04-23 02:52:00.996026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.924 [2024-04-23 02:52:01.027097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.924 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.924 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.924 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.924 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.924 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=0x1 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=fill 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=0x80 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 
02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=software 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@22 -- # accel_module=software 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=64 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=64 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=1 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val=Yes 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:21.925 02:52:01 -- accel/accel.sh@20 -- # val= 00:07:21.925 02:52:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # IFS=: 00:07:21.925 02:52:01 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.302 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.302 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.302 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.302 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.302 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.302 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:23.302 02:52:02 -- accel/accel.sh@27 -- # [[ -n fill ]] 
00:07:23.302 02:52:02 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.302 00:07:23.302 real 0m1.306s 00:07:23.302 user 0m1.145s 00:07:23.302 sys 0m0.070s 00:07:23.302 02:52:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:23.302 02:52:02 -- common/autotest_common.sh@10 -- # set +x 00:07:23.302 ************************************ 00:07:23.302 END TEST accel_fill 00:07:23.302 ************************************ 00:07:23.302 02:52:02 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:23.302 02:52:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:23.302 02:52:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.302 02:52:02 -- common/autotest_common.sh@10 -- # set +x 00:07:23.302 ************************************ 00:07:23.302 START TEST accel_copy_crc32c 00:07:23.302 ************************************ 00:07:23.302 02:52:02 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:23.302 02:52:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.302 02:52:02 -- accel/accel.sh@17 -- # local accel_module 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.302 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.302 02:52:02 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:23.302 02:52:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:23.302 02:52:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.302 02:52:02 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.302 02:52:02 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.302 02:52:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.302 02:52:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.302 02:52:02 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.302 02:52:02 -- accel/accel.sh@40 -- # local IFS=, 00:07:23.302 02:52:02 -- accel/accel.sh@41 -- # jq -r . 00:07:23.302 [2024-04-23 02:52:02.277690] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:23.302 [2024-04-23 02:52:02.277762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74968 ] 00:07:23.302 [2024-04-23 02:52:02.392711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
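copy_crc32c fuses the two preceding operations: each 4096-byte source buffer is copied while a CRC-32C is computed over it, here with the default seed of 0 (the val=0 in the dump below). A standalone sketch under the same assumptions:

    # combined copy + crc32c in one operation, seed 0, verify results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -S 0 -y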
00:07:23.302 [2024-04-23 02:52:02.408800] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.302 [2024-04-23 02:52:02.442777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=0x1 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=0 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=software 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@22 -- # accel_module=software 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=32 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=32 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=1 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case 
"$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.561 02:52:02 -- accel/accel.sh@20 -- # val=Yes 00:07:23.561 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.561 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.562 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.562 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.562 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.562 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:23.562 02:52:02 -- accel/accel.sh@20 -- # val= 00:07:23.562 02:52:02 -- accel/accel.sh@21 -- # case "$var" in 00:07:23.562 02:52:02 -- accel/accel.sh@19 -- # IFS=: 00:07:23.562 02:52:02 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.497 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.497 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.497 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.497 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.497 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.497 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.497 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.497 02:52:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.497 02:52:03 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:24.497 02:52:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.497 00:07:24.497 real 0m1.318s 00:07:24.497 user 0m1.152s 00:07:24.497 sys 0m0.071s 00:07:24.497 02:52:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:24.497 ************************************ 00:07:24.497 END TEST accel_copy_crc32c 00:07:24.497 ************************************ 00:07:24.497 02:52:03 -- common/autotest_common.sh@10 -- # set +x 00:07:24.497 02:52:03 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:24.497 02:52:03 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:24.497 02:52:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.497 02:52:03 -- common/autotest_common.sh@10 -- # set +x 00:07:24.755 ************************************ 00:07:24.755 START TEST accel_copy_crc32c_C2 00:07:24.755 
************************************ 00:07:24.755 02:52:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:24.755 02:52:03 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.755 02:52:03 -- accel/accel.sh@17 -- # local accel_module 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 02:52:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:24.755 02:52:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:24.755 02:52:03 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.755 02:52:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.755 02:52:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.755 02:52:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.755 02:52:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.755 02:52:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.755 02:52:03 -- accel/accel.sh@40 -- # local IFS=, 00:07:24.755 02:52:03 -- accel/accel.sh@41 -- # jq -r . 00:07:24.755 [2024-04-23 02:52:03.712453] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:24.755 [2024-04-23 02:52:03.712535] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75007 ] 00:07:24.755 [2024-04-23 02:52:03.831989] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:24.755 [2024-04-23 02:52:03.850183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.755 [2024-04-23 02:52:03.880839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.755 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.755 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:24.755 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 02:52:03 -- accel/accel.sh@20 -- # val=0x1 00:07:24.755 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:24.755 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:24.755 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=0 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- 
accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=software 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@22 -- # accel_module=software 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=32 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=32 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=1 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val=Yes 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.014 02:52:03 -- accel/accel.sh@20 -- # val= 00:07:25.014 02:52:03 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # IFS=: 00:07:25.014 02:52:03 -- accel/accel.sh@19 -- # read -r var val 00:07:25.950 02:52:04 -- accel/accel.sh@20 -- # val= 00:07:25.950 02:52:04 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.950 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:25.950 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.950 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:25.950 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.950 02:52:05 -- accel/accel.sh@20 -- # val= 
00:07:25.950 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.950 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:25.950 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.950 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:25.950 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.950 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:25.951 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:25.951 02:52:05 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.951 02:52:05 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:25.951 02:52:05 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.951 00:07:25.951 real 0m1.321s 00:07:25.951 user 0m1.158s 00:07:25.951 sys 0m0.073s 00:07:25.951 02:52:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:25.951 ************************************ 00:07:25.951 END TEST accel_copy_crc32c_C2 00:07:25.951 ************************************ 00:07:25.951 02:52:05 -- common/autotest_common.sh@10 -- # set +x 00:07:25.951 02:52:05 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:25.951 02:52:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:25.951 02:52:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:25.951 02:52:05 -- common/autotest_common.sh@10 -- # set +x 00:07:26.210 ************************************ 00:07:26.210 START TEST accel_dualcast 00:07:26.210 ************************************ 00:07:26.210 02:52:05 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:26.210 02:52:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.210 02:52:05 -- accel/accel.sh@17 -- # local accel_module 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.210 02:52:05 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.210 02:52:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:26.210 02:52:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.210 02:52:05 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.210 02:52:05 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.210 02:52:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.210 02:52:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.210 02:52:05 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.210 02:52:05 -- accel/accel.sh@40 -- # local IFS=, 00:07:26.210 02:52:05 -- accel/accel.sh@41 -- # jq -r . 00:07:26.210 [2024-04-23 02:52:05.140274] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:26.210 [2024-04-23 02:52:05.140377] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75045 ] 00:07:26.210 [2024-04-23 02:52:05.255113] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
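dualcast duplicates a single 4096-byte source buffer into two destination buffers in one operation; that description is inferred from the workload's name and the buffer dump below, and the command is an illustrative sketch rather than a line from this run:

    # one source written to two destinations, 4 KiB transfer, verify on
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y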
00:07:26.210 [2024-04-23 02:52:05.270059] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.210 [2024-04-23 02:52:05.300653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.210 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.210 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.210 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.210 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.210 02:52:05 -- accel/accel.sh@20 -- # val=0x1 00:07:26.210 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.210 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.210 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.210 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val=dualcast 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val=software 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@22 -- # accel_module=software 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val=32 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val=32 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val=1 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val=Yes 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 
00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:26.211 02:52:05 -- accel/accel.sh@20 -- # val= 00:07:26.211 02:52:05 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # IFS=: 00:07:26.211 02:52:05 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@20 -- # val= 00:07:27.591 02:52:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@20 -- # val= 00:07:27.591 02:52:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@20 -- # val= 00:07:27.591 02:52:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@20 -- # val= 00:07:27.591 02:52:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@20 -- # val= 00:07:27.591 02:52:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@20 -- # val= 00:07:27.591 02:52:06 -- accel/accel.sh@21 -- # case "$var" in 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.591 02:52:06 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:27.591 02:52:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.591 00:07:27.591 real 0m1.299s 00:07:27.591 user 0m1.150s 00:07:27.591 sys 0m0.059s 00:07:27.591 02:52:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:27.591 ************************************ 00:07:27.591 END TEST accel_dualcast 00:07:27.591 ************************************ 00:07:27.591 02:52:06 -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 02:52:06 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:27.591 02:52:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:27.591 02:52:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.591 02:52:06 -- common/autotest_common.sh@10 -- # set +x 00:07:27.591 ************************************ 00:07:27.591 START TEST accel_compare 00:07:27.591 ************************************ 00:07:27.591 02:52:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:27.591 02:52:06 -- accel/accel.sh@16 -- # local accel_opc 00:07:27.591 02:52:06 -- accel/accel.sh@17 -- # local accel_module 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # IFS=: 00:07:27.591 02:52:06 -- accel/accel.sh@19 -- # read -r var val 00:07:27.591 02:52:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:27.591 02:52:06 -- accel/accel.sh@12 -- # 
00:07:27.591 02:52:06 -- accel/accel.sh@12 -- # build_accel_config
00:07:27.591 02:52:06 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:27.591 02:52:06 -- accel/accel.sh@41 -- # jq -r .
00:07:27.591 [2024-04-23 02:52:06.552825] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:27.591 [2024-04-23 02:52:06.552904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75078 ]
00:07:27.591 [2024-04-23 02:52:06.672257] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:27.591 [2024-04-23 02:52:06.689967] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:27.591 [2024-04-23 02:52:06.723936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=0x1
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=compare
00:07:27.850 02:52:06 -- accel/accel.sh@23 -- # accel_opc=compare
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=software
00:07:27.850 02:52:06 -- accel/accel.sh@22 -- # accel_module=software
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=32
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=32
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=1
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:27.850 02:52:06 -- accel/accel.sh@20 -- # val=Yes
00:07:28.787 02:52:07 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:28.787 02:52:07 -- accel/accel.sh@27 -- # [[ -n compare ]]
00:07:28.787 02:52:07 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:28.787 real 0m1.321s
00:07:28.787 user 0m1.165s
00:07:28.787 sys 0m0.064s
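The escaped pattern in [[ software == \s\o\f\t\w\a\r\e ]] above is not log corruption: the right-hand side of == inside [[ ]] is a glob pattern, and bash xtrace prints each character of that operand escaped. The check simply asserts that the software fallback module executed the workload; an equivalent unescaped form:

    accel_module=software
    # the RHS of == in [[ ]] is a pattern, which is why xtrace echoes it character-escaped
    [[ $accel_module == software ]] && echo 'workload ran on the software module'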
00:07:28.787 ************************************
00:07:28.787 END TEST accel_compare
00:07:28.787 ************************************
00:07:29.046 02:52:07 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:07:29.046 ************************************
00:07:29.047 START TEST accel_xor
00:07:29.047 ************************************
00:07:29.047 02:52:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:07:29.047 02:52:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:07:29.047 [2024-04-23 02:52:07.999187] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:29.047 [2024-04-23 02:52:07.999280] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75117 ]
00:07:29.047 [2024-04-23 02:52:08.118897] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:29.047 [2024-04-23 02:52:08.138929] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:29.047 [2024-04-23 02:52:08.182695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=0x1
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=xor
00:07:29.306 02:52:08 -- accel/accel.sh@23 -- # accel_opc=xor
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=2
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=software
00:07:29.306 02:52:08 -- accel/accel.sh@22 -- # accel_module=software
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=32
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=32
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=1
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:29.306 02:52:08 -- accel/accel.sh@20 -- # val=Yes
00:07:30.247 02:52:09 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:30.247 02:52:09 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:30.247 02:52:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:30.247 real 0m1.336s
00:07:30.247 user 0m0.019s
00:07:30.247 sys 0m0.003s
00:07:30.247 ************************************
00:07:30.247 END TEST accel_xor
00:07:30.247 ************************************
00:07:30.506 02:52:09 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:07:30.506 ************************************
00:07:30.506 START TEST accel_xor
00:07:30.506 ************************************
00:07:30.506 02:52:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:07:30.507 02:52:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:07:30.507 [2024-04-23 02:52:09.447843] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:30.507 [2024-04-23 02:52:09.447967] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75156 ]
00:07:30.507 [2024-04-23 02:52:09.568756] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:30.507 [2024-04-23 02:52:09.588360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:30.507 [2024-04-23 02:52:09.623991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=0x1
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=xor
00:07:30.507 02:52:09 -- accel/accel.sh@23 -- # accel_opc=xor
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=3
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=software
00:07:30.507 02:52:09 -- accel/accel.sh@22 -- # accel_module=software
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=32
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=32
00:07:30.507 02:52:09 -- accel/accel.sh@20 -- # val=1
00:07:30.766 02:52:09 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:30.766 02:52:09 -- accel/accel.sh@20 -- # val=Yes
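This second xor pass differs from the first only in -x 3; judging by the val=2 and val=3 entries in the two traces, -x presumably sets the number of XOR source buffers, with two as the default. Reusing SPDK_ROOT from the first sketch:

    # same workload, but three source buffers instead of two
    # (the meaning of -x is an inference from the val=2 / val=3 trace entries)
    "$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{}') -t 1 -w xor -y -x 3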
00:07:31.705 02:52:10 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:31.705 02:52:10 -- accel/accel.sh@27 -- # [[ -n xor ]]
00:07:31.705 02:52:10 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:31.705 real 0m1.324s
00:07:31.705 user 0m1.161s
00:07:31.705 sys 0m0.070s
00:07:31.705 ************************************
00:07:31.705 END TEST accel_xor
00:07:31.705 ************************************
00:07:31.705 02:52:10 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:07:31.705 ************************************
00:07:31.705 START TEST accel_dif_verify
00:07:31.705 ************************************
00:07:31.705 02:52:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:07:31.705 02:52:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:07:31.965 [2024-04-23 02:52:10.875594] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:31.965 [2024-04-23 02:52:10.875710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75195 ]
00:07:31.965 [2024-04-23 02:52:10.990584] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:31.965 [2024-04-23 02:52:11.004886] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.965 [2024-04-23 02:52:11.035322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=0x1
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=dif_verify
00:07:31.965 02:52:11 -- accel/accel.sh@23 -- # accel_opc=dif_verify
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val='512 bytes'
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val='8 bytes'
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=software
00:07:31.965 02:52:11 -- accel/accel.sh@22 -- # accel_module=software
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=32
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=32
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=1
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:31.965 02:52:11 -- accel/accel.sh@20 -- # val=No
00:07:33.343 02:52:12 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:33.343 02:52:12 -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:07:33.343 02:52:12 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:33.343 real 0m1.297s
00:07:33.343 user 0m1.150s
00:07:33.343 sys 0m0.058s
00:07:33.343 ************************************
00:07:33.343 END TEST accel_dif_verify
00:07:33.343 ************************************
00:07:33.343 02:52:12 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
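The dif_verify trace carries buffer sizes the simpler workloads lack: 4096-byte data buffers plus '512 bytes' and '8 bytes' values. Reading them against the standard T10 DIF layout (an assumption, since the log itself does not label them), the 8 bytes would be the per-block protection tuple appended to each 512-byte sector:

    # T10 DIF tuple layout; mapping these fields to the trace values is an assumption
    #   bytes 0-1: guard tag (CRC16 of the data block)
    #   bytes 2-3: application tag
    #   bytes 4-7: reference tag (typically the LBA)
    "$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{}') -t 1 -w dif_verify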
00:07:33.343 ************************************
00:07:33.343 START TEST accel_dif_generate
00:07:33.343 ************************************
00:07:33.343 02:52:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:07:33.343 02:52:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:07:33.343 [2024-04-23 02:52:12.281326] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:33.343 [2024-04-23 02:52:12.281404] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75229 ]
00:07:33.343 [2024-04-23 02:52:12.400382] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:33.343 [2024-04-23 02:52:12.410498] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:33.343 [2024-04-23 02:52:12.440765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val=0x1
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val=dif_generate
00:07:33.343 02:52:12 -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val='512 bytes'
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val='8 bytes'
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val=software
00:07:33.343 02:52:12 -- accel/accel.sh@22 -- # accel_module=software
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val=32
00:07:33.343 02:52:12 -- accel/accel.sh@20 -- # val=32
00:07:33.344 02:52:12 -- accel/accel.sh@20 -- # val=1
00:07:33.344 02:52:12 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:33.344 02:52:12 -- accel/accel.sh@20 -- # val=No
00:07:34.721 02:52:13 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:34.721 02:52:13 -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:07:34.721 02:52:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:34.721 real 0m1.300s
00:07:34.721 user 0m1.143s
00:07:34.721 sys 0m0.066s
00:07:34.721 ************************************
00:07:34.721 END TEST accel_dif_generate
00:07:34.721 ************************************
00:07:34.721 02:52:13 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
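dif_generate computed protection tuples over its source buffer; the dif_generate_copy run that follows presumably writes the data together with freshly generated DIF into a separate destination buffer. The workload name and the matching sizes in the two traces are the only evidence here, so treat this as a sketch:

    # hypothetical back-to-back run of the two DIF-generation variants
    "$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{}') -t 1 -w dif_generate
    "$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{}') -t 1 -w dif_generate_copy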
00:07:34.721 ************************************
00:07:34.722 START TEST accel_dif_generate_copy
00:07:34.722 ************************************
00:07:34.722 02:52:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:07:34.722 02:52:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:07:34.722 [2024-04-23 02:52:13.703799] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:34.722 [2024-04-23 02:52:13.703870] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75268 ]
00:07:34.722 [2024-04-23 02:52:13.819002] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:34.722 [2024-04-23 02:52:13.836412] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:34.722 [2024-04-23 02:52:13.865479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=0x1
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=dif_generate_copy
00:07:34.981 02:52:13 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=software
00:07:34.981 02:52:13 -- accel/accel.sh@22 -- # accel_module=software
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=32
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=32
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=1
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:34.981 02:52:13 -- accel/accel.sh@20 -- # val=No
00:07:35.919 02:52:14 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:35.919 02:52:14 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:07:35.919 02:52:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:35.919 real 0m1.315s
00:07:35.919 user 0m1.158s
00:07:35.919 sys 0m0.064s
00:07:35.919 ************************************
00:07:35.919 END TEST accel_dif_generate_copy
00:07:35.919 ************************************
00:07:36.178 02:52:15 -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:36.178 02:52:15 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:36.178 ************************************
00:07:36.178 START TEST accel_comp
00:07:36.178 ************************************
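Unlike the fixed 4096-byte patterns above, the compress test works on real input: -l points accel_perf at a file to compress, here the corpus test/accel/bib bundled with the repo, and the decompress test that follows replays the same file with -y verification. With SPDK_ROOT from the first sketch:

    # compress the bundled corpus for one second on the software module
    "$SPDK_ROOT/build/examples/accel_perf" -c <(echo '{}') -t 1 -w compress -l "$SPDK_ROOT/test/accel/bib"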
00:07:36.178 02:52:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:36.178 02:52:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:36.178 [2024-04-23 02:52:15.128627] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:07:36.178 [2024-04-23 02:52:15.128702] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75307 ]
00:07:36.178 [2024-04-23 02:52:15.242874] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:07:36.178 [2024-04-23 02:52:15.252654] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.178 [2024-04-23 02:52:15.285108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.178 02:52:15 -- accel/accel.sh@20 -- # val=0x1
00:07:36.178 02:52:15 -- accel/accel.sh@20 -- # val=compress
00:07:36.178 02:52:15 -- accel/accel.sh@23 -- # accel_opc=compress
00:07:36.178 02:52:15 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val=software
00:07:36.179 02:52:15 -- accel/accel.sh@22 -- # accel_module=software
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val=32
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val=32
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val=1
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val='1 seconds'
00:07:36.179 02:52:15 -- accel/accel.sh@20 -- # val=No
00:07:37.559 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.559 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.559 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.559 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.559 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.559 02:52:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.559 02:52:16 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:37.559 02:52:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.559 00:07:37.559 real 0m1.300s 00:07:37.559 user 0m1.146s 00:07:37.559 sys 0m0.062s 00:07:37.559 02:52:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:37.559 02:52:16 -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 END TEST accel_comp 00:07:37.559 ************************************ 00:07:37.559 02:52:16 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.559 02:52:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:37.559 02:52:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.559 02:52:16 -- common/autotest_common.sh@10 -- # set +x 00:07:37.559 ************************************ 00:07:37.559 START TEST accel_decomp 00:07:37.559 ************************************ 00:07:37.559 02:52:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.559 02:52:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.559 02:52:16 -- accel/accel.sh@17 -- # local accel_module 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.559 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.559 02:52:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.559 02:52:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.559 02:52:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:37.559 02:52:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.559 02:52:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.559 02:52:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.559 02:52:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.559 02:52:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.559 02:52:16 -- accel/accel.sh@40 -- # local IFS=, 00:07:37.559 02:52:16 -- accel/accel.sh@41 -- # jq -r . 00:07:37.559 [2024-04-23 02:52:16.547882] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:37.559 [2024-04-23 02:52:16.547996] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75340 ] 00:07:37.559 [2024-04-23 02:52:16.669523] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
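The compress pass above completed in about 1.3 s of wall time on the software accel module. The harness drives every case through accel_perf with a generated JSON config on fd 62; a minimal sketch of rerunning just that workload by hand, using the paths from this log and assuming accel_perf falls back to its built-in defaults when -c is omitted:

  # sketch: one-second software-compress run against the same input file
  # (omitting -c is an assumption; the harness normally feeds a JSON
  #  config on fd 62 via build_accel_config)
  perf_bin=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
  bib=/home/vagrant/spdk_repo/spdk/test/accel/bib
  "$perf_bin" -t 1 -w compress -l "$bib"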
00:07:37.559 [2024-04-23 02:52:16.684776] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.826 [2024-04-23 02:52:16.716857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=0x1 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=decompress 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=software 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@22 -- # accel_module=software 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=32 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=32 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=1 00:07:37.826 02:52:16 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val=Yes 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:37.826 02:52:16 -- accel/accel.sh@20 -- # val= 00:07:37.826 02:52:16 -- accel/accel.sh@21 -- # case "$var" in 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # IFS=: 00:07:37.826 02:52:16 -- accel/accel.sh@19 -- # read -r var val 00:07:38.763 02:52:17 -- accel/accel.sh@20 -- # val= 00:07:38.763 02:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.763 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:38.763 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:38.763 02:52:17 -- accel/accel.sh@20 -- # val= 00:07:38.763 02:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.763 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:38.763 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:38.763 02:52:17 -- accel/accel.sh@20 -- # val= 00:07:38.763 02:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:38.764 02:52:17 -- accel/accel.sh@20 -- # val= 00:07:38.764 02:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:38.764 02:52:17 -- accel/accel.sh@20 -- # val= 00:07:38.764 02:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:38.764 02:52:17 -- accel/accel.sh@20 -- # val= 00:07:38.764 02:52:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:38.764 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:38.764 02:52:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.764 02:52:17 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:38.764 02:52:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.764 00:07:38.764 real 0m1.329s 00:07:38.764 user 0m1.161s 00:07:38.764 sys 0m0.075s 00:07:38.764 02:52:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.764 02:52:17 -- common/autotest_common.sh@10 -- # set +x 00:07:38.764 ************************************ 00:07:38.764 END TEST accel_decomp 00:07:38.764 ************************************ 00:07:38.764 02:52:17 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.764 02:52:17 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:38.764 02:52:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.764 02:52:17 -- common/autotest_common.sh@10 -- # set +x 00:07:39.023 ************************************ 00:07:39.023 START TEST 
accel_decmop_full 00:07:39.023 ************************************ 00:07:39.023 02:52:17 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.023 02:52:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:39.023 02:52:17 -- accel/accel.sh@17 -- # local accel_module 00:07:39.023 02:52:17 -- accel/accel.sh@19 -- # IFS=: 00:07:39.023 02:52:17 -- accel/accel.sh@19 -- # read -r var val 00:07:39.023 02:52:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.023 02:52:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.023 02:52:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.023 02:52:17 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:39.023 02:52:17 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:39.023 02:52:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.023 02:52:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.023 02:52:17 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:39.023 02:52:17 -- accel/accel.sh@40 -- # local IFS=, 00:07:39.023 02:52:17 -- accel/accel.sh@41 -- # jq -r . 00:07:39.023 [2024-04-23 02:52:17.988678] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:39.023 [2024-04-23 02:52:17.989170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75384 ] 00:07:39.023 [2024-04-23 02:52:18.109699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
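Unlike the earlier runs, which report '4096 bytes' per operation, the *_full variants pass -o 0 and the value loop below correspondingly shows '111250 bytes': with a zero transfer size, accel_perf appears to size the operation from the input file itself rather than using the 4 KiB default. A hedged sketch of that invocation, under the same path and config assumptions as the sketch above:

  # sketch: full-file decompress with verify (-y) and the transfer
  # size taken from the file (-o 0)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -y -o 0 \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib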
00:07:39.023 [2024-04-23 02:52:18.125905] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.023 [2024-04-23 02:52:18.155962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=0x1 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=decompress 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=software 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@22 -- # accel_module=software 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=32 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=32 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=1 00:07:39.283 02:52:18 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val=Yes 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:39.283 02:52:18 -- accel/accel.sh@20 -- # val= 00:07:39.283 02:52:18 -- accel/accel.sh@21 -- # case "$var" in 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # IFS=: 00:07:39.283 02:52:18 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.221 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.221 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.221 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.221 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.221 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.221 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.221 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.221 02:52:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.221 02:52:19 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:40.221 02:52:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.221 00:07:40.221 real 0m1.324s 00:07:40.221 user 0m1.169s 00:07:40.221 sys 0m0.062s 00:07:40.221 ************************************ 00:07:40.221 END TEST accel_decmop_full 00:07:40.221 ************************************ 00:07:40.221 02:52:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:40.221 02:52:19 -- common/autotest_common.sh@10 -- # set +x 00:07:40.221 02:52:19 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.221 02:52:19 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:40.221 02:52:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.221 02:52:19 -- common/autotest_common.sh@10 -- # set +x 00:07:40.480 ************************************ 00:07:40.480 START 
TEST accel_decomp_mcore 00:07:40.480 ************************************ 00:07:40.480 02:52:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.480 02:52:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.481 02:52:19 -- accel/accel.sh@17 -- # local accel_module 00:07:40.481 02:52:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.481 02:52:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.481 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.481 02:52:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.481 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.481 02:52:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.481 02:52:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.481 02:52:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.481 02:52:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.481 02:52:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.481 02:52:19 -- accel/accel.sh@40 -- # local IFS=, 00:07:40.481 02:52:19 -- accel/accel.sh@41 -- # jq -r . 00:07:40.481 [2024-04-23 02:52:19.420788] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:40.481 [2024-04-23 02:52:19.420863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75417 ] 00:07:40.481 [2024-04-23 02:52:19.536233] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
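This mcore variant adds -m 0xf, and the app banner above duly reports four available cores with one reactor started per core. Core masks are plain hex bitmaps, so a mask covering the first N cores can be computed in the shell; a small illustration (the helper name is made up for this note):

  # mask covering cores 0..N-1 is (1 << N) - 1
  cores_mask() { printf '0x%x\n' $(( (1 << $1) - 1 )); }
  cores_mask 4   # -> 0xf, the mask this test passes via -m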
00:07:40.481 [2024-04-23 02:52:19.549384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:40.481 [2024-04-23 02:52:19.583015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.481 [2024-04-23 02:52:19.583193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:40.481 [2024-04-23 02:52:19.583561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:40.481 [2024-04-23 02:52:19.583565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val=0xf 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val=decompress 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.740 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.740 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.740 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val=software 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@22 -- # accel_module=software 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val=32 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 
00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val=32 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val=1 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val=Yes 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:40.741 02:52:19 -- accel/accel.sh@20 -- # val= 00:07:40.741 02:52:19 -- accel/accel.sh@21 -- # case "$var" in 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # IFS=: 00:07:40.741 02:52:19 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- accel/accel.sh@20 -- # val= 00:07:41.678 02:52:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.678 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.678 02:52:20 -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.678 02:52:20 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:41.678 02:52:20 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.678 ************************************ 00:07:41.678 END TEST accel_decomp_mcore 00:07:41.678 ************************************ 00:07:41.678 00:07:41.678 real 0m1.337s 00:07:41.678 user 0m4.401s 00:07:41.678 sys 0m0.090s 00:07:41.678 02:52:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:41.678 02:52:20 -- common/autotest_common.sh@10 -- # set +x 00:07:41.678 02:52:20 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:41.678 02:52:20 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:41.678 02:52:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.678 02:52:20 -- common/autotest_common.sh@10 -- # set +x 00:07:41.938 ************************************ 00:07:41.938 START TEST accel_decomp_full_mcore 00:07:41.938 ************************************ 00:07:41.938 02:52:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:41.938 02:52:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.938 02:52:20 -- accel/accel.sh@17 -- # local accel_module 00:07:41.938 02:52:20 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:41.938 02:52:20 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:41.938 02:52:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.938 02:52:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.938 02:52:20 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.938 02:52:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.938 02:52:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.938 02:52:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.938 02:52:20 -- accel/accel.sh@40 -- # local IFS=, 00:07:41.938 02:52:20 -- accel/accel.sh@41 -- # jq -r . 00:07:41.938 [2024-04-23 02:52:20.871795] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:41.938 [2024-04-23 02:52:20.872621] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75464 ] 00:07:41.938 [2024-04-23 02:52:20.989326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
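Note the timing the mcore case just reported: roughly 4.4 s of user time against 1.3 s real, which is what you would expect from four polling reactors each spinning on its own core for the duration of the run. The full_mcore case that follows combines the two knobs exercised separately above, -o 0 for full-file transfers and -m 0xf for four reactors; a sketch of the combined run under the same assumptions as the earlier sketches:

  # sketch: full-file decompress spread across four polled cores
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -y -o 0 -m 0xf \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib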
00:07:41.938 [2024-04-23 02:52:21.004120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.938 [2024-04-23 02:52:21.036623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.938 [2024-04-23 02:52:21.036765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.938 [2024-04-23 02:52:21.036866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.938 [2024-04-23 02:52:21.037147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val=0xf 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val=decompress 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val=software 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.938 02:52:21 -- accel/accel.sh@22 -- # accel_module=software 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.938 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.938 02:52:21 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:41.938 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val=32 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 
00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val=32 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val=1 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val=Yes 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:41.939 02:52:21 -- accel/accel.sh@20 -- # val= 00:07:41.939 02:52:21 -- accel/accel.sh@21 -- # case "$var" in 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # IFS=: 00:07:41.939 02:52:21 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.318 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- 
accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.318 02:52:22 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.318 02:52:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.318 00:07:43.318 real 0m1.333s 00:07:43.318 user 0m4.401s 00:07:43.318 sys 0m0.082s 00:07:43.318 02:52:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:43.318 02:52:22 -- common/autotest_common.sh@10 -- # set +x 00:07:43.318 ************************************ 00:07:43.318 END TEST accel_decomp_full_mcore 00:07:43.318 ************************************ 00:07:43.318 02:52:22 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.318 02:52:22 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:43.318 02:52:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:43.318 02:52:22 -- common/autotest_common.sh@10 -- # set +x 00:07:43.318 ************************************ 00:07:43.318 START TEST accel_decomp_mthread 00:07:43.318 ************************************ 00:07:43.318 02:52:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.318 02:52:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:43.318 02:52:22 -- accel/accel.sh@17 -- # local accel_module 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.318 02:52:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.318 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.318 02:52:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:43.318 02:52:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:43.318 02:52:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.318 02:52:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.318 02:52:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.318 02:52:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.318 02:52:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.318 02:52:22 -- accel/accel.sh@40 -- # local IFS=, 00:07:43.318 02:52:22 -- accel/accel.sh@41 -- # jq -r . 00:07:43.318 [2024-04-23 02:52:22.318928] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:43.318 [2024-04-23 02:52:22.319002] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75500 ] 00:07:43.318 [2024-04-23 02:52:22.433765] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
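The mthread variant starting here swaps the core mask for -T 2, so a single reactor hosts two worker threads (the value loop below shows val=2) instead of spreading work across cores. Sketch, with the same path and config assumptions as before:

  # sketch: decompress on one core with two threads per core (-T 2)
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
      -t 1 -w decompress -y -T 2 \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib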
00:07:43.318 [2024-04-23 02:52:22.451283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.577 [2024-04-23 02:52:22.484049] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.577 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.577 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.577 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.577 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.577 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.577 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.577 02:52:22 -- accel/accel.sh@20 -- # val=0x1 00:07:43.577 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.577 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.577 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.577 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.577 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=decompress 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=software 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@22 -- # accel_module=software 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=32 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=32 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=2 00:07:43.578 02:52:22 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val=Yes 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:43.578 02:52:22 -- accel/accel.sh@20 -- # val= 00:07:43.578 02:52:22 -- accel/accel.sh@21 -- # case "$var" in 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # IFS=: 00:07:43.578 02:52:22 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:44.515 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.515 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.515 02:52:23 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:44.515 02:52:23 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:44.515 02:52:23 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.515 00:07:44.515 real 0m1.315s 00:07:44.515 user 0m1.153s 00:07:44.515 sys 0m0.071s 00:07:44.515 02:52:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:44.515 02:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.515 ************************************ 00:07:44.515 END TEST accel_decomp_mthread 00:07:44.515 ************************************ 00:07:44.515 02:52:23 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.515 02:52:23 -- 
common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:44.515 02:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.515 02:52:23 -- common/autotest_common.sh@10 -- # set +x 00:07:44.774 ************************************ 00:07:44.774 START TEST accel_deomp_full_mthread 00:07:44.774 ************************************ 00:07:44.774 02:52:23 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.774 02:52:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.774 02:52:23 -- accel/accel.sh@17 -- # local accel_module 00:07:44.774 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:44.774 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:44.774 02:52:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.774 02:52:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:44.774 02:52:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.774 02:52:23 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:44.774 02:52:23 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:44.774 02:52:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.774 02:52:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.774 02:52:23 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:44.774 02:52:23 -- accel/accel.sh@40 -- # local IFS=, 00:07:44.774 02:52:23 -- accel/accel.sh@41 -- # jq -r . 00:07:44.774 [2024-04-23 02:52:23.774838] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:44.774 [2024-04-23 02:52:23.774931] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75533 ] 00:07:44.774 [2024-04-23 02:52:23.894772] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
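Every case in this log is driven through the run_test helper from common/autotest_common.sh, which produces the START TEST / END TEST banners and the real/user/sys timings seen throughout. The actual helper does more (argument checks, xtrace management); a deliberately simplified sketch of only the behavior visible in this log:

  # simplified sketch of run_test's visible behavior: banners plus
  # timing around the wrapped command (not the real implementation)
  run_test_sketch() {
      local name=$1; shift
      echo "START TEST $name"
      time "$@"
      echo "END TEST $name"
  }
  run_test_sketch accel_demo true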
00:07:44.774 [2024-04-23 02:52:23.915212] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.033 [2024-04-23 02:52:23.950965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=0x1 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=decompress 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=software 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@22 -- # accel_module=software 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=32 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=32 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=2 00:07:45.033 02:52:23 -- 
accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val=Yes 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.033 02:52:23 -- accel/accel.sh@20 -- # val= 00:07:45.033 02:52:23 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # IFS=: 00:07:45.033 02:52:23 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@20 -- # val= 00:07:45.991 02:52:25 -- accel/accel.sh@21 -- # case "$var" in 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # IFS=: 00:07:45.991 02:52:25 -- accel/accel.sh@19 -- # read -r var val 00:07:45.991 02:52:25 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.991 02:52:25 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.991 02:52:25 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.991 00:07:45.991 real 0m1.365s 00:07:45.991 user 0m1.198s 00:07:45.991 sys 0m0.075s 00:07:45.991 02:52:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:45.991 02:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:45.991 ************************************ 00:07:45.991 END TEST accel_deomp_full_mthread 00:07:45.991 ************************************ 00:07:46.249 02:52:25 -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:46.249 02:52:25 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 
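The dif binary below is launched the same way accel_perf was: with -c /dev/fd/62, meaning the harness hands it a generated JSON config on file descriptor 62 rather than a file on disk. A sketch of that calling convention using a bash here-string bound to fd 62; the empty subsystems object is a placeholder, not what build_accel_config actually emits:

  # sketch: pass a JSON config on fd 62, mirroring '-c /dev/fd/62'
  # (placeholder config; the real harness generates the JSON)
  dif_bin=/home/vagrant/spdk_repo/spdk/test/accel/dif/dif
  "$dif_bin" -c /dev/fd/62 62<<< '{"subsystems": []}'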
00:07:46.249 02:52:25 -- accel/accel.sh@137 -- # build_accel_config 00:07:46.249 02:52:25 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:46.249 02:52:25 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.249 02:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:46.249 02:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.249 02:52:25 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.250 02:52:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.250 02:52:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.250 02:52:25 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.250 02:52:25 -- accel/accel.sh@40 -- # local IFS=, 00:07:46.250 02:52:25 -- accel/accel.sh@41 -- # jq -r . 00:07:46.250 ************************************ 00:07:46.250 START TEST accel_dif_functional_tests 00:07:46.250 ************************************ 00:07:46.250 02:52:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:46.250 [2024-04-23 02:52:25.278609] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:46.250 [2024-04-23 02:52:25.278693] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75579 ] 00:07:46.250 [2024-04-23 02:52:25.400220] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:46.508 [2024-04-23 02:52:25.419288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:46.508 [2024-04-23 02:52:25.452273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.508 [2024-04-23 02:52:25.452404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.508 [2024-04-23 02:52:25.452406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.508 00:07:46.508 00:07:46.508 CUnit - A unit testing framework for C - Version 2.1-3 00:07:46.508 http://cunit.sourceforge.net/ 00:07:46.508 00:07:46.508 00:07:46.508 Suite: accel_dif 00:07:46.508 Test: verify: DIF generated, GUARD check ...passed 00:07:46.508 Test: verify: DIF generated, APPTAG check ...passed 00:07:46.508 Test: verify: DIF generated, REFTAG check ...passed 00:07:46.508 Test: verify: DIF not generated, GUARD check ...passed 00:07:46.508 Test: verify: DIF not generated, APPTAG check ...[2024-04-23 02:52:25.497918] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:46.508 [2024-04-23 02:52:25.498039] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:46.508 [2024-04-23 02:52:25.498078] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:46.508 passed 00:07:46.508 Test: verify: DIF not generated, REFTAG check ...[2024-04-23 02:52:25.498107] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:46.508 [2024-04-23 02:52:25.498166] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:46.508 passed 00:07:46.508 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:46.508 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-23 02:52:25.498274] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:46.508 
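The dif app under test takes its accel configuration as JSON on /dev/fd/62: build_accel_config (traced above) collects RPC-style objects in the accel_json_cfg array and joins them with IFS=, before jq normalizes the result. A rough sketch of the mechanism; the envelope shape and the gen_config helper are assumptions, and the params mirror rpc.py's -o/-m flags:

    # Hand a JSON config to an SPDK app over a /dev/fd path via process
    # substitution, as "dif -c /dev/fd/62" above suggests.
    accel_json_cfg=('{"method": "accel_assign_opc", "params": {"opname": "copy", "module": "software"}}')
    gen_config() {
        local IFS=,   # join array elements with commas, as the trace shows
        printf '{"subsystems": [{"subsystem": "accel", "config": [%s]}]}\n' \
            "${accel_json_cfg[*]}" | jq -r .
    }
    /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c <(gen_config)   # <(...) expands to a /dev/fd/NN path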
[2024-04-23 02:52:25.498346] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 passed 00:07:46.508 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:46.508 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:46.508 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:46.508 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-23 02:52:25.498590] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 passed 00:07:46.508 Test: generate copy: DIF generated, GUARD check ...passed 00:07:46.508 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:46.508 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:46.508 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:46.508 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:46.508 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:46.508 Test: generate copy: iovecs-len validate ...passed 00:07:46.508 Test: generate copy: buffer alignment validate ...[2024-04-23 02:52:25.498968] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. passed 00:07:46.508 00:07:46.508 Run Summary: Type Total Ran Passed Failed Inactive 00:07:46.508 suites 1 1 n/a 0 0 00:07:46.508 tests 20 20 20 0 0 00:07:46.508 asserts 204 204 204 0 n/a 00:07:46.508 00:07:46.508 Elapsed time = 0.002 seconds 00:07:46.508 ************************************ 00:07:46.508 END TEST accel_dif_functional_tests 00:07:46.508 ************************************ 00:07:46.508 00:07:46.508 real 0m0.411s 00:07:46.508 user 0m0.414s 00:07:46.508 sys 0m0.093s 00:07:46.508 02:52:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.508 02:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.767 00:07:46.767 real 0m31.188s 00:07:46.767 user 0m32.407s 00:07:46.767 sys 0m3.380s 00:07:46.767 02:52:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:46.767 ************************************ 00:07:46.767 END TEST accel 00:07:46.767 ************************************ 00:07:46.767 02:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.767 02:52:25 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 02:52:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 02:52:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 02:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:46.768 ************************************ 00:07:46.768 START TEST accel_rpc 00:07:46.768 ************************************ 00:07:46.768 02:52:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:46.768 * Looking for test storage... 00:07:46.768 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:46.768 02:52:25 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:46.768 02:52:25 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=75643 00:07:46.768 02:52:25 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:46.768 02:52:25 -- accel/accel_rpc.sh@15 -- # waitforlisten 75643 00:07:46.768 02:52:25 -- common/autotest_common.sh@817 -- # '[' -z 75643 ']' 00:07:46.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
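waitforlisten, from autotest_common.sh, prints the "Waiting for process..." message above and then blocks until the freshly started spdk_tgt (pid 75643) answers RPCs on /var/tmp/spdk.sock. A rough equivalent of that polling loop, not the helper's exact body; the retry count matches the max_retries=100 default visible in the trace, while the sleep interval is a guess:

    # Poll the target's RPC socket until it answers or retries run out.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break   # target is up and serving RPCs
        fi
        sleep 0.5
    done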
00:07:46.768 02:52:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.768 02:52:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:46.768 02:52:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.768 02:52:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:46.768 02:52:25 -- common/autotest_common.sh@10 -- # set +x 00:07:47.027 [2024-04-23 02:52:25.950717] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:47.027 [2024-04-23 02:52:25.950797] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75643 ] 00:07:47.027 [2024-04-23 02:52:26.071716] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:47.027 [2024-04-23 02:52:26.091363] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.027 [2024-04-23 02:52:26.124279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.027 02:52:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:47.027 02:52:26 -- common/autotest_common.sh@850 -- # return 0 00:07:47.027 02:52:26 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:47.027 02:52:26 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:47.027 02:52:26 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:47.027 02:52:26 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:47.027 02:52:26 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:47.027 02:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.027 02:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.027 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 ************************************ 00:07:47.287 START TEST accel_assign_opcode 00:07:47.287 ************************************ 00:07:47.287 02:52:26 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:07:47.287 02:52:26 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:47.287 02:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.287 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 [2024-04-23 02:52:26.245085] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:47.287 02:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.287 02:52:26 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:47.287 02:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.287 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 [2024-04-23 02:52:26.257079] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:47.287 02:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.287 02:52:26 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:47.287 02:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.287 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 02:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.287 02:52:26 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:47.287 
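Outside the harness, the sequence rpc_cmd drives above can be issued directly with scripts/rpc.py: before init, opcode assignments are merely recorded, so the test first assigns copy to a nonexistent module, overrides it with software, then completes initialization and reads the table back:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc accel_assign_opc -o copy -m incorrect   # accepted pre-init, only logged
    $rpc accel_assign_opc -o copy -m software    # a later assignment replaces it
    $rpc framework_start_init                    # the --wait-for-rpc target initializes now
    $rpc accel_get_opc_assignments | jq -r .copy # prints: software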
02:52:26 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:47.287 02:52:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:47.287 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.287 02:52:26 -- accel/accel_rpc.sh@42 -- # grep software 00:07:47.287 02:52:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:47.287 software 00:07:47.287 ************************************ 00:07:47.287 END TEST accel_assign_opcode 00:07:47.287 ************************************ 00:07:47.287 00:07:47.287 real 0m0.187s 00:07:47.287 user 0m0.054s 00:07:47.287 sys 0m0.013s 00:07:47.287 02:52:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.287 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.547 02:52:26 -- accel/accel_rpc.sh@55 -- # killprocess 75643 00:07:47.547 02:52:26 -- common/autotest_common.sh@936 -- # '[' -z 75643 ']' 00:07:47.547 02:52:26 -- common/autotest_common.sh@940 -- # kill -0 75643 00:07:47.547 02:52:26 -- common/autotest_common.sh@941 -- # uname 00:07:47.547 02:52:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:47.547 02:52:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75643 00:07:47.547 killing process with pid 75643 00:07:47.547 02:52:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:47.547 02:52:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:47.547 02:52:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75643' 00:07:47.547 02:52:26 -- common/autotest_common.sh@955 -- # kill 75643 00:07:47.547 02:52:26 -- common/autotest_common.sh@960 -- # wait 75643 00:07:47.806 ************************************ 00:07:47.806 END TEST accel_rpc 00:07:47.806 ************************************ 00:07:47.806 00:07:47.806 real 0m0.921s 00:07:47.806 user 0m0.959s 00:07:47.806 sys 0m0.308s 00:07:47.806 02:52:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:47.806 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.806 02:52:26 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.806 02:52:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:47.806 02:52:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:47.806 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:47.806 ************************************ 00:07:47.806 START TEST app_cmdline 00:07:47.806 ************************************ 00:07:47.806 02:52:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:47.806 * Looking for test storage... 00:07:47.806 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:47.806 02:52:26 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:47.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
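cmdline.sh arms its EXIT trap before anything else, so the spdk_tgt it is about to launch gets killed however the test ends; the single quotes defer expansion of $spdk_tgt_pid until the trap fires, which is why the pid can be captured afterwards. The pattern, reduced to a sketch around the harness helpers killprocess and waitforlisten:

    # Guarantee cleanup of a daemon regardless of how the script exits.
    trap 'killprocess $spdk_tgt_pid' EXIT   # $spdk_tgt_pid expands at exit time
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    # ... test body runs here; the trap cleans up on success or failure ...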
00:07:47.806 02:52:26 -- app/cmdline.sh@17 -- # spdk_tgt_pid=75734 00:07:47.806 02:52:26 -- app/cmdline.sh@18 -- # waitforlisten 75734 00:07:47.806 02:52:26 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:47.806 02:52:26 -- common/autotest_common.sh@817 -- # '[' -z 75734 ']' 00:07:47.806 02:52:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.806 02:52:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:07:47.806 02:52:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.806 02:52:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:07:47.806 02:52:26 -- common/autotest_common.sh@10 -- # set +x 00:07:48.065 [2024-04-23 02:52:26.980377] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:48.065 [2024-04-23 02:52:26.980464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75734 ] 00:07:48.065 [2024-04-23 02:52:27.101390] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:48.065 [2024-04-23 02:52:27.119237] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.065 [2024-04-23 02:52:27.151841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.325 02:52:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:07:48.325 02:52:27 -- common/autotest_common.sh@850 -- # return 0 00:07:48.325 02:52:27 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:48.585 { 00:07:48.585 "version": "SPDK v24.05-pre git sha1 a1264177c", 00:07:48.585 "fields": { 00:07:48.585 "major": 24, 00:07:48.585 "minor": 5, 00:07:48.585 "patch": 0, 00:07:48.585 "suffix": "-pre", 00:07:48.585 "commit": "a1264177c" 00:07:48.585 } 00:07:48.585 } 00:07:48.585 02:52:27 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:48.585 02:52:27 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:48.585 02:52:27 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:48.585 02:52:27 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:48.585 02:52:27 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:48.585 02:52:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:07:48.585 02:52:27 -- common/autotest_common.sh@10 -- # set +x 00:07:48.585 02:52:27 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:48.585 02:52:27 -- app/cmdline.sh@26 -- # sort 00:07:48.585 02:52:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:07:48.585 02:52:27 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:48.585 02:52:27 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:48.585 02:52:27 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.585 02:52:27 -- common/autotest_common.sh@638 -- # local es=0 00:07:48.585 02:52:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.585 02:52:27 -- common/autotest_common.sh@626 -- # local 
arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.585 02:52:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:48.585 02:52:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.585 02:52:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:48.585 02:52:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.585 02:52:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:48.585 02:52:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:48.585 02:52:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:48.585 02:52:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:48.844 request: 00:07:48.844 { 00:07:48.844 "method": "env_dpdk_get_mem_stats", 00:07:48.844 "req_id": 1 00:07:48.844 } 00:07:48.844 Got JSON-RPC error response 00:07:48.844 response: 00:07:48.844 { 00:07:48.844 "code": -32601, 00:07:48.844 "message": "Method not found" 00:07:48.844 } 00:07:48.844 02:52:27 -- common/autotest_common.sh@641 -- # es=1 00:07:48.844 02:52:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:48.844 02:52:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:07:48.844 02:52:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:48.844 02:52:27 -- app/cmdline.sh@1 -- # killprocess 75734 00:07:48.844 02:52:27 -- common/autotest_common.sh@936 -- # '[' -z 75734 ']' 00:07:48.844 02:52:27 -- common/autotest_common.sh@940 -- # kill -0 75734 00:07:48.844 02:52:27 -- common/autotest_common.sh@941 -- # uname 00:07:48.844 02:52:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:48.844 02:52:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75734 00:07:48.844 killing process with pid 75734 00:07:48.844 02:52:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:48.844 02:52:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:48.844 02:52:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75734' 00:07:48.844 02:52:27 -- common/autotest_common.sh@955 -- # kill 75734 00:07:48.844 02:52:27 -- common/autotest_common.sh@960 -- # wait 75734 00:07:49.103 00:07:49.103 real 0m1.327s 00:07:49.103 user 0m1.800s 00:07:49.103 sys 0m0.315s 00:07:49.103 ************************************ 00:07:49.103 END TEST app_cmdline 00:07:49.103 ************************************ 00:07:49.103 02:52:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.103 02:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.103 02:52:28 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:49.103 02:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.103 02:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.103 02:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.362 ************************************ 00:07:49.362 START TEST version 00:07:49.362 ************************************ 00:07:49.362 02:52:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:49.362 * Looking for test storage... 
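The exchange above is the point of the cmdline test: spdk_tgt was started with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are served and everything else, env_dpdk_get_mem_stats included, is rejected with JSON-RPC error -32601 before reaching a handler. Against such a target:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc spdk_get_version        # allowed: returns the version object shown above
    $rpc rpc_get_methods         # allowed: lists exactly the two permitted methods
    $rpc env_dpdk_get_mem_stats  # rejected: "Method not found" (-32601)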
00:07:49.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:49.362 02:52:28 -- app/version.sh@17 -- # get_header_version major 00:07:49.362 02:52:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.362 02:52:28 -- app/version.sh@14 -- # cut -f2 00:07:49.362 02:52:28 -- app/version.sh@14 -- # tr -d '"' 00:07:49.362 02:52:28 -- app/version.sh@17 -- # major=24 00:07:49.362 02:52:28 -- app/version.sh@18 -- # get_header_version minor 00:07:49.362 02:52:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.362 02:52:28 -- app/version.sh@14 -- # cut -f2 00:07:49.362 02:52:28 -- app/version.sh@14 -- # tr -d '"' 00:07:49.362 02:52:28 -- app/version.sh@18 -- # minor=5 00:07:49.362 02:52:28 -- app/version.sh@19 -- # get_header_version patch 00:07:49.362 02:52:28 -- app/version.sh@14 -- # cut -f2 00:07:49.362 02:52:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.362 02:52:28 -- app/version.sh@14 -- # tr -d '"' 00:07:49.362 02:52:28 -- app/version.sh@19 -- # patch=0 00:07:49.362 02:52:28 -- app/version.sh@20 -- # get_header_version suffix 00:07:49.362 02:52:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:49.362 02:52:28 -- app/version.sh@14 -- # cut -f2 00:07:49.362 02:52:28 -- app/version.sh@14 -- # tr -d '"' 00:07:49.362 02:52:28 -- app/version.sh@20 -- # suffix=-pre 00:07:49.362 02:52:28 -- app/version.sh@22 -- # version=24.5 00:07:49.362 02:52:28 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:49.362 02:52:28 -- app/version.sh@28 -- # version=24.5rc0 00:07:49.362 02:52:28 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:49.362 02:52:28 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:49.362 02:52:28 -- app/version.sh@30 -- # py_version=24.5rc0 00:07:49.362 02:52:28 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:07:49.362 00:07:49.362 real 0m0.157s 00:07:49.362 user 0m0.089s 00:07:49.362 sys 0m0.100s 00:07:49.362 ************************************ 00:07:49.362 END TEST version 00:07:49.362 ************************************ 00:07:49.362 02:52:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:49.362 02:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.362 02:52:28 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:07:49.362 02:52:28 -- spdk/autotest.sh@194 -- # uname -s 00:07:49.362 02:52:28 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:49.362 02:52:28 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:49.362 02:52:28 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:49.362 02:52:28 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:49.362 02:52:28 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:49.362 02:52:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:49.362 02:52:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:49.362 02:52:28 -- common/autotest_common.sh@10 -- # set +x 00:07:49.620 ************************************ 00:07:49.620 START TEST spdk_dd 00:07:49.620 
************************************ 00:07:49.620 02:52:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:49.620 * Looking for test storage... 00:07:49.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:49.620 02:52:28 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:49.620 02:52:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.620 02:52:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.620 02:52:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.621 02:52:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.621 02:52:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.621 02:52:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.621 02:52:28 -- paths/export.sh@5 -- # export PATH 00:07:49.621 02:52:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.621 02:52:28 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:49.879 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:49.879 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:49.879 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:49.879 02:52:29 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:49.879 02:52:29 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:49.879 02:52:29 -- scripts/common.sh@309 -- # local bdf bdfs 00:07:49.879 02:52:29 -- scripts/common.sh@310 -- # local nvmes 00:07:49.879 02:52:29 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:07:49.879 02:52:29 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:49.879 02:52:29 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:07:49.879 02:52:29 -- scripts/common.sh@295 -- # local bdf= 00:07:49.879 02:52:29 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:07:49.879 02:52:29 -- scripts/common.sh@230 -- # local class 
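nvme_in_userspace, traced below, encodes the NVMe PCI identity (class 01, subclass 08, progif 02) with printf %02x and matches it against lspci output. Condensed, the discovery the following trace walks through amounts to the pipeline below; lspci -mm quotes its fields, hence the embedded quote characters in cc and the trailing tr:

    # Print the PCI addresses of NVMe controllers (class/subclass 0108, progif 02).
    lspci -mm -n -D | grep -i -- -p02 \
        | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
        | tr -d '"'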
00:07:49.879 02:52:29 -- scripts/common.sh@231 -- # local subclass 00:07:49.879 02:52:29 -- scripts/common.sh@232 -- # local progif 00:07:49.879 02:52:29 -- scripts/common.sh@233 -- # printf %02x 1 00:07:49.879 02:52:29 -- scripts/common.sh@233 -- # class=01 00:07:49.879 02:52:29 -- scripts/common.sh@234 -- # printf %02x 8 00:07:49.879 02:52:29 -- scripts/common.sh@234 -- # subclass=08 00:07:49.879 02:52:29 -- scripts/common.sh@235 -- # printf %02x 2 00:07:49.879 02:52:29 -- scripts/common.sh@235 -- # progif=02 00:07:49.879 02:52:29 -- scripts/common.sh@237 -- # hash lspci 00:07:49.879 02:52:29 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:07:49.879 02:52:29 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:07:49.879 02:52:29 -- scripts/common.sh@240 -- # grep -i -- -p02 00:07:49.879 02:52:29 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:49.879 02:52:29 -- scripts/common.sh@242 -- # tr -d '"' 00:07:50.138 02:52:29 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:50.138 02:52:29 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:07:50.138 02:52:29 -- scripts/common.sh@15 -- # local i 00:07:50.138 02:52:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:07:50.138 02:52:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:50.138 02:52:29 -- scripts/common.sh@24 -- # return 0 00:07:50.138 02:52:29 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:07:50.138 02:52:29 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:50.138 02:52:29 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:07:50.138 02:52:29 -- scripts/common.sh@15 -- # local i 00:07:50.138 02:52:29 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:07:50.138 02:52:29 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:50.138 02:52:29 -- scripts/common.sh@24 -- # return 0 00:07:50.138 02:52:29 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:07:50.138 02:52:29 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:50.138 02:52:29 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:50.138 02:52:29 -- scripts/common.sh@320 -- # uname -s 00:07:50.138 02:52:29 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:50.138 02:52:29 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:50.138 02:52:29 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:07:50.138 02:52:29 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:50.138 02:52:29 -- scripts/common.sh@320 -- # uname -s 00:07:50.138 02:52:29 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:07:50.138 02:52:29 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:07:50.138 02:52:29 -- scripts/common.sh@325 -- # (( 2 )) 00:07:50.138 02:52:29 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:50.138 02:52:29 -- dd/dd.sh@13 -- # check_liburing 00:07:50.138 02:52:29 -- dd/common.sh@139 -- # local lib so 00:07:50.138 02:52:29 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:50.138 02:52:29 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 
-- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:50.138 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.138 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.13.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # 
read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.14.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.1.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event.so.13.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.15.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_accel.so.15.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- 
dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_dma.so.4.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_sock.so.9.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_init.so.5.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_trace.so.10.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_keyring.so.1.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_util.so.9.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # 
[[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:50.139 02:52:29 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:50.139 02:52:29 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:50.139 * spdk_dd linked to liburing 00:07:50.139 02:52:29 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:50.139 02:52:29 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:50.139 02:52:29 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:50.139 02:52:29 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:50.139 02:52:29 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:50.139 02:52:29 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:50.139 02:52:29 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:50.139 02:52:29 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:50.139 02:52:29 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:50.139 02:52:29 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:50.139 02:52:29 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:50.139 02:52:29 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:50.139 02:52:29 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:50.139 02:52:29 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:50.139 02:52:29 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:50.139 02:52:29 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:50.139 02:52:29 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:50.139 02:52:29 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:50.139 02:52:29 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:50.139 02:52:29 -- 
common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:50.140 02:52:29 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:50.140 02:52:29 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:50.140 02:52:29 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:50.140 02:52:29 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:50.140 02:52:29 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:50.140 02:52:29 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:50.140 02:52:29 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:50.140 02:52:29 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:50.140 02:52:29 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:50.140 02:52:29 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:50.140 02:52:29 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:50.140 02:52:29 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:50.140 02:52:29 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:50.140 02:52:29 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:50.140 02:52:29 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:50.140 02:52:29 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:50.140 02:52:29 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:50.140 02:52:29 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:50.140 02:52:29 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:50.140 02:52:29 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:50.140 02:52:29 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:50.140 02:52:29 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:50.140 02:52:29 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:50.140 02:52:29 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:50.140 02:52:29 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:50.140 02:52:29 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:50.140 02:52:29 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:50.140 02:52:29 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:50.140 02:52:29 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:50.140 02:52:29 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:50.140 02:52:29 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:50.140 02:52:29 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:50.140 02:52:29 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:50.140 02:52:29 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:50.140 02:52:29 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:07:50.140 02:52:29 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=y 00:07:50.140 02:52:29 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:07:50.140 02:52:29 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:07:50.140 02:52:29 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:07:50.140 02:52:29 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:07:50.140 02:52:29 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:07:50.140 02:52:29 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:07:50.140 02:52:29 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:07:50.140 02:52:29 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:50.140 02:52:29 -- 
common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:07:50.140 02:52:29 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:07:50.140 02:52:29 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:07:50.140 02:52:29 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:07:50.140 02:52:29 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:07:50.140 02:52:29 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:50.140 02:52:29 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:07:50.140 02:52:29 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:07:50.140 02:52:29 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:07:50.140 02:52:29 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:07:50.140 02:52:29 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:07:50.140 02:52:29 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:07:50.140 02:52:29 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:07:50.140 02:52:29 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:07:50.140 02:52:29 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:07:50.140 02:52:29 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:07:50.140 02:52:29 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:07:50.140 02:52:29 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:50.140 02:52:29 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:07:50.140 02:52:29 -- common/build_config.sh@82 -- # CONFIG_URING=y 00:07:50.140 02:52:29 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:50.140 02:52:29 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:50.140 02:52:29 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:50.140 02:52:29 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:50.140 02:52:29 -- dd/common.sh@157 -- # return 0 00:07:50.140 02:52:29 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:50.140 02:52:29 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:50.140 02:52:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:50.140 02:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.140 02:52:29 -- common/autotest_common.sh@10 -- # set +x 00:07:50.140 ************************************ 00:07:50.140 START TEST spdk_dd_basic_rw 00:07:50.140 ************************************ 00:07:50.140 02:52:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:50.140 * Looking for test storage... 
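check_liburing, which finished just above with liburing_in_use=1, decides whether spdk_dd was built against liburing by asking the dynamic loader for the binary's dependency list (LD_TRACE_LOADED_OBJECTS=1 makes it print resolved shared objects instead of executing) and testing every entry against liburing.so.*. Reduced to a sketch of that mechanism:

    # Scan the loader's dependency listing for liburing, as the long run of
    # [[ ... == liburing.so.* ]] checks above does one library at a time.
    liburing_in_use=0
    while read -r lib _ so _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(LD_TRACE_LOADED_OBJECTS=1 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)
    (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'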
00:07:50.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:50.400 02:52:29 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:50.400 02:52:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.400 02:52:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.400 02:52:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.400 02:52:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.400 02:52:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.400 02:52:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.400 02:52:29 -- paths/export.sh@5 -- # export PATH 00:07:50.400 02:52:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.400 02:52:29 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:50.400 02:52:29 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:50.400 02:52:29 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:50.400 02:52:29 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:50.400 02:52:29 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:50.400 02:52:29 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:50.400 02:52:29 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:50.400 02:52:29 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:50.400 02:52:29 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.400 02:52:29 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:50.400 02:52:29 -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:50.400 02:52:29 -- dd/common.sh@126 -- # mapfile -t id 00:07:50.400 02:52:29 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:50.401 02:52:29 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On 
Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:50.401 02:52:29 -- dd/common.sh@130 -- # lbaf=04 00:07:50.401 02:52:29 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID 
List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write 
Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 57 Data Units Written: 3 Host Read Commands: 1329 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:50.401 02:52:29 -- dd/common.sh@132 -- # lbaf=4096 00:07:50.401 02:52:29 -- dd/common.sh@134 -- # echo 4096 00:07:50.401 02:52:29 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:50.401 02:52:29 -- dd/basic_rw.sh@96 -- # : 00:07:50.401 02:52:29 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.401 02:52:29 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:50.401 02:52:29 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:50.401 02:52:29 -- dd/common.sh@31 -- # xtrace_disable 00:07:50.401 02:52:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.401 02:52:29 -- common/autotest_common.sh@10 -- # set +x 00:07:50.401 02:52:29 -- common/autotest_common.sh@10 -- # set +x 00:07:50.660 { 
00:07:50.660 "subsystems": [ 00:07:50.660 { 00:07:50.660 "subsystem": "bdev", 00:07:50.660 "config": [ 00:07:50.660 { 00:07:50.660 "params": { 00:07:50.661 "trtype": "pcie", 00:07:50.661 "traddr": "0000:00:10.0", 00:07:50.661 "name": "Nvme0" 00:07:50.661 }, 00:07:50.661 "method": "bdev_nvme_attach_controller" 00:07:50.661 }, 00:07:50.661 { 00:07:50.661 "method": "bdev_wait_for_examine" 00:07:50.661 } 00:07:50.661 ] 00:07:50.661 } 00:07:50.661 ] 00:07:50.661 } 00:07:50.661 ************************************ 00:07:50.661 START TEST dd_bs_lt_native_bs 00:07:50.661 ************************************ 00:07:50.661 02:52:29 -- common/autotest_common.sh@1111 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.661 02:52:29 -- common/autotest_common.sh@638 -- # local es=0 00:07:50.661 02:52:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.661 02:52:29 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.661 02:52:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.661 02:52:29 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.661 02:52:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.661 02:52:29 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.661 02:52:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:07:50.661 02:52:29 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:50.661 02:52:29 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:50.661 02:52:29 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:50.661 [2024-04-23 02:52:29.652391] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:50.661 [2024-04-23 02:52:29.652797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76071 ] 00:07:50.661 [2024-04-23 02:52:29.773747] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:07:50.661 [2024-04-23 02:52:29.793193] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.919 [2024-04-23 02:52:29.833260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.919 [2024-04-23 02:52:29.948582] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:50.919 [2024-04-23 02:52:29.948649] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.919 [2024-04-23 02:52:30.020010] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:51.177 02:52:30 -- common/autotest_common.sh@641 -- # es=234 00:07:51.177 02:52:30 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:07:51.177 ************************************ 00:07:51.177 END TEST dd_bs_lt_native_bs 00:07:51.177 ************************************ 00:07:51.177 02:52:30 -- common/autotest_common.sh@650 -- # es=106 00:07:51.177 02:52:30 -- common/autotest_common.sh@651 -- # case "$es" in 00:07:51.177 02:52:30 -- common/autotest_common.sh@658 -- # es=1 00:07:51.177 02:52:30 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:07:51.177 00:07:51.177 real 0m0.500s 00:07:51.177 user 0m0.275s 00:07:51.177 sys 0m0.116s 00:07:51.177 02:52:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:51.177 02:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.177 02:52:30 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:51.177 02:52:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:51.177 02:52:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:51.177 02:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.177 ************************************ 00:07:51.177 START TEST dd_rw 00:07:51.177 ************************************ 00:07:51.177 02:52:30 -- common/autotest_common.sh@1111 -- # basic_rw 4096 00:07:51.177 02:52:30 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:51.177 02:52:30 -- dd/basic_rw.sh@12 -- # local count size 00:07:51.177 02:52:30 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:51.177 02:52:30 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:51.177 02:52:30 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:51.177 02:52:30 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:51.177 02:52:30 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:51.177 02:52:30 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:51.177 02:52:30 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:51.177 02:52:30 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:51.177 02:52:30 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:51.177 02:52:30 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:51.177 02:52:30 -- dd/basic_rw.sh@23 -- # count=15 00:07:51.177 02:52:30 -- dd/basic_rw.sh@24 -- # count=15 00:07:51.177 02:52:30 -- dd/basic_rw.sh@25 -- # size=61440 00:07:51.177 02:52:30 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:51.177 02:52:30 -- dd/common.sh@98 -- # xtrace_disable 00:07:51.177 02:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:51.743 02:52:30 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:51.744 02:52:30 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:51.744 02:52:30 -- dd/common.sh@31 -- # xtrace_disable 00:07:51.744 02:52:30 -- common/autotest_common.sh@10 -- # set +x 00:07:52.002 [2024-04-23 02:52:30.931946] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 
initialization... 00:07:52.002 [2024-04-23 02:52:30.932054] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76106 ] 00:07:52.002 { 00:07:52.002 "subsystems": [ 00:07:52.002 { 00:07:52.002 "subsystem": "bdev", 00:07:52.002 "config": [ 00:07:52.002 { 00:07:52.002 "params": { 00:07:52.002 "trtype": "pcie", 00:07:52.002 "traddr": "0000:00:10.0", 00:07:52.002 "name": "Nvme0" 00:07:52.002 }, 00:07:52.002 "method": "bdev_nvme_attach_controller" 00:07:52.002 }, 00:07:52.002 { 00:07:52.002 "method": "bdev_wait_for_examine" 00:07:52.002 } 00:07:52.002 ] 00:07:52.002 } 00:07:52.002 ] 00:07:52.002 } 00:07:52.002 [2024-04-23 02:52:31.053400] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:52.002 [2024-04-23 02:52:31.070111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.002 [2024-04-23 02:52:31.109053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.263  Copying: 60/60 [kB] (average 29 MBps) 00:07:52.263 00:07:52.263 02:52:31 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:52.263 02:52:31 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:52.263 02:52:31 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.263 02:52:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.263 { 00:07:52.263 "subsystems": [ 00:07:52.263 { 00:07:52.263 "subsystem": "bdev", 00:07:52.263 "config": [ 00:07:52.263 { 00:07:52.263 "params": { 00:07:52.263 "trtype": "pcie", 00:07:52.263 "traddr": "0000:00:10.0", 00:07:52.263 "name": "Nvme0" 00:07:52.263 }, 00:07:52.263 "method": "bdev_nvme_attach_controller" 00:07:52.263 }, 00:07:52.263 { 00:07:52.263 "method": "bdev_wait_for_examine" 00:07:52.263 } 00:07:52.263 ] 00:07:52.263 } 00:07:52.263 ] 00:07:52.263 } 00:07:52.263 [2024-04-23 02:52:31.407936] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:52.263 [2024-04-23 02:52:31.408030] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76125 ] 00:07:52.528 [2024-04-23 02:52:31.528525] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
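The JSON blocks repeated before every copy are the bdev configuration that gen_conf feeds each spdk_dd invocation over /dev/fd/61 or /dev/fd/62. It attaches the QEMU NVMe controller at PCI address 0000:00:10.0 as controller "Nvme0" (whose first namespace becomes bdev Nvme0n1, the --ob/--ib target) and waits for bdev examination before I/O starts. Saved to an ordinary file, an equivalent standalone run would look roughly like this; the /tmp/bdev.json name is illustrative, everything else is copied from the trace:

    # Attach the controller as "Nvme0" and wait for its bdevs to be examined.
    cat > /tmp/bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller"
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
      --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
      --ob=Nvme0n1 --bs=4096 --qd=1 --json /tmp/bdev.json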
00:07:52.528 [2024-04-23 02:52:31.545339] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.528 [2024-04-23 02:52:31.577319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.787  Copying: 60/60 [kB] (average 29 MBps) 00:07:52.787 00:07:52.787 02:52:31 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.787 02:52:31 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:52.787 02:52:31 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.787 02:52:31 -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.787 02:52:31 -- dd/common.sh@12 -- # local size=61440 00:07:52.787 02:52:31 -- dd/common.sh@14 -- # local bs=1048576 00:07:52.787 02:52:31 -- dd/common.sh@15 -- # local count=1 00:07:52.787 02:52:31 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.787 02:52:31 -- dd/common.sh@18 -- # gen_conf 00:07:52.787 02:52:31 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.787 02:52:31 -- common/autotest_common.sh@10 -- # set +x 00:07:52.787 [2024-04-23 02:52:31.877122] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:52.787 [2024-04-23 02:52:31.877233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76135 ] 00:07:52.787 { 00:07:52.787 "subsystems": [ 00:07:52.787 { 00:07:52.787 "subsystem": "bdev", 00:07:52.787 "config": [ 00:07:52.787 { 00:07:52.787 "params": { 00:07:52.787 "trtype": "pcie", 00:07:52.787 "traddr": "0000:00:10.0", 00:07:52.787 "name": "Nvme0" 00:07:52.787 }, 00:07:52.787 "method": "bdev_nvme_attach_controller" 00:07:52.787 }, 00:07:52.787 { 00:07:52.787 "method": "bdev_wait_for_examine" 00:07:52.787 } 00:07:52.787 ] 00:07:52.787 } 00:07:52.787 ] 00:07:52.787 } 00:07:53.046 [2024-04-23 02:52:31.995417] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:53.047 [2024-04-23 02:52:32.012477] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.047 [2024-04-23 02:52:32.046906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.306  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:53.306 00:07:53.306 02:52:32 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:53.306 02:52:32 -- dd/basic_rw.sh@23 -- # count=15 00:07:53.306 02:52:32 -- dd/basic_rw.sh@24 -- # count=15 00:07:53.306 02:52:32 -- dd/basic_rw.sh@25 -- # size=61440 00:07:53.306 02:52:32 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:53.306 02:52:32 -- dd/common.sh@98 -- # xtrace_disable 00:07:53.306 02:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.874 02:52:32 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:53.874 02:52:32 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:53.874 02:52:32 -- dd/common.sh@31 -- # xtrace_disable 00:07:53.874 02:52:32 -- common/autotest_common.sh@10 -- # set +x 00:07:53.874 [2024-04-23 02:52:32.949457] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:07:53.874 [2024-04-23 02:52:32.949570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76154 ] 00:07:53.874 { 00:07:53.874 "subsystems": [ 00:07:53.874 { 00:07:53.874 "subsystem": "bdev", 00:07:53.874 "config": [ 00:07:53.874 { 00:07:53.874 "params": { 00:07:53.874 "trtype": "pcie", 00:07:53.874 "traddr": "0000:00:10.0", 00:07:53.874 "name": "Nvme0" 00:07:53.874 }, 00:07:53.874 "method": "bdev_nvme_attach_controller" 00:07:53.874 }, 00:07:53.874 { 00:07:53.874 "method": "bdev_wait_for_examine" 00:07:53.874 } 00:07:53.874 ] 00:07:53.874 } 00:07:53.874 ] 00:07:53.874 } 00:07:54.134 [2024-04-23 02:52:33.070195] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.134 [2024-04-23 02:52:33.090266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.134 [2024-04-23 02:52:33.124769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.393  Copying: 60/60 [kB] (average 58 MBps) 00:07:54.393 00:07:54.393 02:52:33 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:54.393 02:52:33 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:54.393 02:52:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.393 02:52:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.393 [2024-04-23 02:52:33.393461] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:54.393 [2024-04-23 02:52:33.393537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76173 ] 00:07:54.393 { 00:07:54.393 "subsystems": [ 00:07:54.393 { 00:07:54.393 "subsystem": "bdev", 00:07:54.393 "config": [ 00:07:54.393 { 00:07:54.393 "params": { 00:07:54.393 "trtype": "pcie", 00:07:54.393 "traddr": "0000:00:10.0", 00:07:54.393 "name": "Nvme0" 00:07:54.393 }, 00:07:54.393 "method": "bdev_nvme_attach_controller" 00:07:54.393 }, 00:07:54.393 { 00:07:54.393 "method": "bdev_wait_for_examine" 00:07:54.393 } 00:07:54.393 ] 00:07:54.393 } 00:07:54.393 ] 00:07:54.393 } 00:07:54.393 [2024-04-23 02:52:33.508419] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
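Each dd_rw iteration above and below follows the same pattern: generate a test file, write it to the bdev at a given block size and queue depth, read the same number of blocks back into a second file, and require a byte-identical diff. Condensed into standalone commands, with paths and flags copied from the trace; /dev/urandom stands in for the suite's gen_bytes helper, and /tmp/bdev.json is the config sketched earlier:

    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    D=/home/vagrant/spdk_repo/spdk/test/dd
    bs=4096 qd=64 count=15                                 # one (bs, qd) pair from the sweep
    head -c $((bs * count)) /dev/urandom > "$D/dd.dump0"   # 61440 bytes of test data
    "$DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs=$bs --qd=$qd --json /tmp/bdev.json                      # write
    "$DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs=$bs --qd=$qd --count=$count --json /tmp/bdev.json       # read back
    diff -q "$D/dd.dump0" "$D/dd.dump1"                    # pass only if contents match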
00:07:54.393 [2024-04-23 02:52:33.523976] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.652 [2024-04-23 02:52:33.554621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.652  Copying: 60/60 [kB] (average 29 MBps) 00:07:54.652 00:07:54.652 02:52:33 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.652 02:52:33 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:54.652 02:52:33 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:54.652 02:52:33 -- dd/common.sh@11 -- # local nvme_ref= 00:07:54.652 02:52:33 -- dd/common.sh@12 -- # local size=61440 00:07:54.652 02:52:33 -- dd/common.sh@14 -- # local bs=1048576 00:07:54.652 02:52:33 -- dd/common.sh@15 -- # local count=1 00:07:54.652 02:52:33 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:54.652 02:52:33 -- dd/common.sh@18 -- # gen_conf 00:07:54.652 02:52:33 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.652 02:52:33 -- common/autotest_common.sh@10 -- # set +x 00:07:54.911 [2024-04-23 02:52:33.827541] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:54.911 [2024-04-23 02:52:33.827629] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76184 ] 00:07:54.911 { 00:07:54.911 "subsystems": [ 00:07:54.911 { 00:07:54.911 "subsystem": "bdev", 00:07:54.911 "config": [ 00:07:54.911 { 00:07:54.911 "params": { 00:07:54.911 "trtype": "pcie", 00:07:54.911 "traddr": "0000:00:10.0", 00:07:54.911 "name": "Nvme0" 00:07:54.911 }, 00:07:54.911 "method": "bdev_nvme_attach_controller" 00:07:54.911 }, 00:07:54.911 { 00:07:54.911 "method": "bdev_wait_for_examine" 00:07:54.911 } 00:07:54.911 ] 00:07:54.911 } 00:07:54.911 ] 00:07:54.911 } 00:07:54.911 [2024-04-23 02:52:33.942490] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:54.911 [2024-04-23 02:52:33.960584] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.911 [2024-04-23 02:52:33.993851] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.170  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:55.170 00:07:55.170 02:52:34 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:55.170 02:52:34 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:55.170 02:52:34 -- dd/basic_rw.sh@23 -- # count=7 00:07:55.170 02:52:34 -- dd/basic_rw.sh@24 -- # count=7 00:07:55.170 02:52:34 -- dd/basic_rw.sh@25 -- # size=57344 00:07:55.170 02:52:34 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:55.170 02:52:34 -- dd/common.sh@98 -- # xtrace_disable 00:07:55.170 02:52:34 -- common/autotest_common.sh@10 -- # set +x 00:07:55.737 02:52:34 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:55.737 02:52:34 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:55.737 02:52:34 -- dd/common.sh@31 -- # xtrace_disable 00:07:55.737 02:52:34 -- common/autotest_common.sh@10 -- # set +x 00:07:55.737 [2024-04-23 02:52:34.835470] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:07:55.737 [2024-04-23 02:52:34.835744] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76203 ] 00:07:55.737 { 00:07:55.737 "subsystems": [ 00:07:55.737 { 00:07:55.737 "subsystem": "bdev", 00:07:55.737 "config": [ 00:07:55.737 { 00:07:55.737 "params": { 00:07:55.737 "trtype": "pcie", 00:07:55.737 "traddr": "0000:00:10.0", 00:07:55.737 "name": "Nvme0" 00:07:55.737 }, 00:07:55.737 "method": "bdev_nvme_attach_controller" 00:07:55.737 }, 00:07:55.737 { 00:07:55.737 "method": "bdev_wait_for_examine" 00:07:55.737 } 00:07:55.737 ] 00:07:55.737 } 00:07:55.737 ] 00:07:55.737 } 00:07:55.997 [2024-04-23 02:52:34.957008] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:55.997 [2024-04-23 02:52:34.974838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.997 [2024-04-23 02:52:35.005426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.256  Copying: 56/56 [kB] (average 27 MBps) 00:07:56.256 00:07:56.256 02:52:35 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:56.256 02:52:35 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:56.256 02:52:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.256 02:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.256 [2024-04-23 02:52:35.291450] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:56.256 [2024-04-23 02:52:35.291543] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76217 ] 00:07:56.256 { 00:07:56.256 "subsystems": [ 00:07:56.256 { 00:07:56.256 "subsystem": "bdev", 00:07:56.256 "config": [ 00:07:56.256 { 00:07:56.256 "params": { 00:07:56.256 "trtype": "pcie", 00:07:56.256 "traddr": "0000:00:10.0", 00:07:56.256 "name": "Nvme0" 00:07:56.256 }, 00:07:56.256 "method": "bdev_nvme_attach_controller" 00:07:56.256 }, 00:07:56.256 { 00:07:56.256 "method": "bdev_wait_for_examine" 00:07:56.256 } 00:07:56.256 ] 00:07:56.256 } 00:07:56.256 ] 00:07:56.256 } 00:07:56.256 [2024-04-23 02:52:35.411543] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
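The 1024/1024 kB copies from /dev/zero interleaved through this run are clear_nvme at work: after each compare, the first megabyte of the namespace is overwritten with zeroes so the next read-back cannot pass by picking up stale data from the previous iteration. Standalone it is a single spdk_dd call, with bs and count as shown in the trace and the config file name carried over from the sketch above:

    # Zero the region under test (1 MiB here) before the next pattern is written.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --ob=Nvme0n1 \
      --bs=1048576 --count=1 --json /tmp/bdev.json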
00:07:56.515 [2024-04-23 02:52:35.428630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.515 [2024-04-23 02:52:35.461752] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.774  Copying: 56/56 [kB] (average 54 MBps) 00:07:56.774 00:07:56.774 02:52:35 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.774 02:52:35 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:56.774 02:52:35 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:56.774 02:52:35 -- dd/common.sh@11 -- # local nvme_ref= 00:07:56.774 02:52:35 -- dd/common.sh@12 -- # local size=57344 00:07:56.774 02:52:35 -- dd/common.sh@14 -- # local bs=1048576 00:07:56.774 02:52:35 -- dd/common.sh@15 -- # local count=1 00:07:56.774 02:52:35 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:56.774 02:52:35 -- dd/common.sh@18 -- # gen_conf 00:07:56.774 02:52:35 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.774 02:52:35 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 [2024-04-23 02:52:35.768442] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:56.774 [2024-04-23 02:52:35.768541] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76232 ] 00:07:56.774 { 00:07:56.774 "subsystems": [ 00:07:56.774 { 00:07:56.774 "subsystem": "bdev", 00:07:56.774 "config": [ 00:07:56.774 { 00:07:56.774 "params": { 00:07:56.774 "trtype": "pcie", 00:07:56.774 "traddr": "0000:00:10.0", 00:07:56.774 "name": "Nvme0" 00:07:56.774 }, 00:07:56.774 "method": "bdev_nvme_attach_controller" 00:07:56.774 }, 00:07:56.774 { 00:07:56.774 "method": "bdev_wait_for_examine" 00:07:56.774 } 00:07:56.774 ] 00:07:56.774 } 00:07:56.774 ] 00:07:56.774 } 00:07:56.774 [2024-04-23 02:52:35.890287] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:56.774 [2024-04-23 02:52:35.906363] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.034 [2024-04-23 02:52:35.937671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.034  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:57.034 00:07:57.293 02:52:36 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:57.293 02:52:36 -- dd/basic_rw.sh@23 -- # count=7 00:07:57.293 02:52:36 -- dd/basic_rw.sh@24 -- # count=7 00:07:57.293 02:52:36 -- dd/basic_rw.sh@25 -- # size=57344 00:07:57.293 02:52:36 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:57.293 02:52:36 -- dd/common.sh@98 -- # xtrace_disable 00:07:57.293 02:52:36 -- common/autotest_common.sh@10 -- # set +x 00:07:57.860 02:52:36 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:57.860 02:52:36 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:57.860 02:52:36 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.860 02:52:36 -- common/autotest_common.sh@10 -- # set +x 00:07:57.860 [2024-04-23 02:52:36.801355] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:07:57.860 [2024-04-23 02:52:36.801648] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76251 ] 00:07:57.860 { 00:07:57.860 "subsystems": [ 00:07:57.860 { 00:07:57.860 "subsystem": "bdev", 00:07:57.860 "config": [ 00:07:57.860 { 00:07:57.860 "params": { 00:07:57.860 "trtype": "pcie", 00:07:57.860 "traddr": "0000:00:10.0", 00:07:57.860 "name": "Nvme0" 00:07:57.860 }, 00:07:57.860 "method": "bdev_nvme_attach_controller" 00:07:57.860 }, 00:07:57.860 { 00:07:57.860 "method": "bdev_wait_for_examine" 00:07:57.860 } 00:07:57.860 ] 00:07:57.860 } 00:07:57.860 ] 00:07:57.860 } 00:07:57.860 [2024-04-23 02:52:36.923267] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:57.860 [2024-04-23 02:52:36.942260] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.860 [2024-04-23 02:52:36.973760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.118  Copying: 56/56 [kB] (average 54 MBps) 00:07:58.118 00:07:58.118 02:52:37 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:58.118 02:52:37 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:58.118 02:52:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.118 02:52:37 -- common/autotest_common.sh@10 -- # set +x 00:07:58.118 [2024-04-23 02:52:37.263536] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:58.118 [2024-04-23 02:52:37.263628] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76267 ] 00:07:58.377 { 00:07:58.377 "subsystems": [ 00:07:58.377 { 00:07:58.377 "subsystem": "bdev", 00:07:58.377 "config": [ 00:07:58.377 { 00:07:58.377 "params": { 00:07:58.377 "trtype": "pcie", 00:07:58.377 "traddr": "0000:00:10.0", 00:07:58.378 "name": "Nvme0" 00:07:58.378 }, 00:07:58.378 "method": "bdev_nvme_attach_controller" 00:07:58.378 }, 00:07:58.378 { 00:07:58.378 "method": "bdev_wait_for_examine" 00:07:58.378 } 00:07:58.378 ] 00:07:58.378 } 00:07:58.378 ] 00:07:58.378 } 00:07:58.378 [2024-04-23 02:52:37.380123] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
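The block sizes being swept come from the bss+=($((native_bs << bs))) lines early in dd_rw: the native 4096 bytes shifted left by 0, 1 and 2 gives 4096, 8192 and 16384. The per-size counts are read off the trace (15, 7 and 3), which keeps every transfer just under 64 kB and produces exactly the 61440/57344/49152 sizes logged above. A small sketch of that arithmetic:

    native_bs=4096
    for s in 0 1 2; do
        bs=$((native_bs << s))
        case $bs in 4096) count=15 ;; 8192) count=7 ;; 16384) count=3 ;; esac
        echo "bs=$bs count=$count size=$((bs * count))"
    done
    # Prints: bs=4096 count=15 size=61440
    #         bs=8192 count=7 size=57344
    #         bs=16384 count=3 size=49152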
00:07:58.378 [2024-04-23 02:52:37.396778] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.378 [2024-04-23 02:52:37.435316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.636  Copying: 56/56 [kB] (average 54 MBps) 00:07:58.636 00:07:58.636 02:52:37 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.636 02:52:37 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:58.636 02:52:37 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:58.636 02:52:37 -- dd/common.sh@11 -- # local nvme_ref= 00:07:58.636 02:52:37 -- dd/common.sh@12 -- # local size=57344 00:07:58.636 02:52:37 -- dd/common.sh@14 -- # local bs=1048576 00:07:58.636 02:52:37 -- dd/common.sh@15 -- # local count=1 00:07:58.636 02:52:37 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:58.636 02:52:37 -- dd/common.sh@18 -- # gen_conf 00:07:58.636 02:52:37 -- dd/common.sh@31 -- # xtrace_disable 00:07:58.636 02:52:37 -- common/autotest_common.sh@10 -- # set +x 00:07:58.636 [2024-04-23 02:52:37.728852] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:58.636 [2024-04-23 02:52:37.728961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76282 ] 00:07:58.636 { 00:07:58.636 "subsystems": [ 00:07:58.636 { 00:07:58.636 "subsystem": "bdev", 00:07:58.636 "config": [ 00:07:58.636 { 00:07:58.636 "params": { 00:07:58.636 "trtype": "pcie", 00:07:58.636 "traddr": "0000:00:10.0", 00:07:58.636 "name": "Nvme0" 00:07:58.636 }, 00:07:58.636 "method": "bdev_nvme_attach_controller" 00:07:58.636 }, 00:07:58.636 { 00:07:58.637 "method": "bdev_wait_for_examine" 00:07:58.637 } 00:07:58.637 ] 00:07:58.637 } 00:07:58.637 ] 00:07:58.637 } 00:07:58.896 [2024-04-23 02:52:37.850037] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:58.896 [2024-04-23 02:52:37.864101] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.896 [2024-04-23 02:52:37.900039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.155  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:59.155 00:07:59.155 02:52:38 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:59.155 02:52:38 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:59.155 02:52:38 -- dd/basic_rw.sh@23 -- # count=3 00:07:59.155 02:52:38 -- dd/basic_rw.sh@24 -- # count=3 00:07:59.155 02:52:38 -- dd/basic_rw.sh@25 -- # size=49152 00:07:59.155 02:52:38 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:59.155 02:52:38 -- dd/common.sh@98 -- # xtrace_disable 00:07:59.155 02:52:38 -- common/autotest_common.sh@10 -- # set +x 00:07:59.414 02:52:38 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:59.414 02:52:38 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:59.414 02:52:38 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.414 02:52:38 -- common/autotest_common.sh@10 -- # set +x 00:07:59.686 [2024-04-23 02:52:38.634870] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:07:59.686 [2024-04-23 02:52:38.635027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76296 ] 00:07:59.686 { 00:07:59.686 "subsystems": [ 00:07:59.686 { 00:07:59.686 "subsystem": "bdev", 00:07:59.686 "config": [ 00:07:59.686 { 00:07:59.686 "params": { 00:07:59.686 "trtype": "pcie", 00:07:59.686 "traddr": "0000:00:10.0", 00:07:59.686 "name": "Nvme0" 00:07:59.686 }, 00:07:59.686 "method": "bdev_nvme_attach_controller" 00:07:59.686 }, 00:07:59.686 { 00:07:59.686 "method": "bdev_wait_for_examine" 00:07:59.686 } 00:07:59.686 ] 00:07:59.686 } 00:07:59.686 ] 00:07:59.686 } 00:07:59.686 [2024-04-23 02:52:38.759295] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:07:59.686 [2024-04-23 02:52:38.776608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.686 [2024-04-23 02:52:38.819399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.983  Copying: 48/48 [kB] (average 46 MBps) 00:07:59.983 00:07:59.983 02:52:39 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:59.983 02:52:39 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:59.983 02:52:39 -- dd/common.sh@31 -- # xtrace_disable 00:07:59.983 02:52:39 -- common/autotest_common.sh@10 -- # set +x 00:07:59.983 [2024-04-23 02:52:39.126728] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:07:59.983 [2024-04-23 02:52:39.126824] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76309 ] 00:07:59.983 { 00:07:59.983 "subsystems": [ 00:07:59.984 { 00:07:59.984 "subsystem": "bdev", 00:07:59.984 "config": [ 00:07:59.984 { 00:07:59.984 "params": { 00:07:59.984 "trtype": "pcie", 00:07:59.984 "traddr": "0000:00:10.0", 00:07:59.984 "name": "Nvme0" 00:07:59.984 }, 00:07:59.984 "method": "bdev_nvme_attach_controller" 00:07:59.984 }, 00:07:59.984 { 00:07:59.984 "method": "bdev_wait_for_examine" 00:07:59.984 } 00:07:59.984 ] 00:07:59.984 } 00:07:59.984 ] 00:07:59.984 } 00:08:00.243 [2024-04-23 02:52:39.247272] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:00.243 [2024-04-23 02:52:39.265333] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.243 [2024-04-23 02:52:39.300873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.502  Copying: 48/48 [kB] (average 46 MBps) 00:08:00.502 00:08:00.502 02:52:39 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.502 02:52:39 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:00.502 02:52:39 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.502 02:52:39 -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.502 02:52:39 -- dd/common.sh@12 -- # local size=49152 00:08:00.502 02:52:39 -- dd/common.sh@14 -- # local bs=1048576 00:08:00.502 02:52:39 -- dd/common.sh@15 -- # local count=1 00:08:00.502 02:52:39 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:00.502 02:52:39 -- dd/common.sh@18 -- # gen_conf 00:08:00.502 02:52:39 -- dd/common.sh@31 -- # xtrace_disable 00:08:00.502 02:52:39 -- common/autotest_common.sh@10 -- # set +x 00:08:00.502 [2024-04-23 02:52:39.589461] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:00.503 [2024-04-23 02:52:39.589559] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76325 ] 00:08:00.503 { 00:08:00.503 "subsystems": [ 00:08:00.503 { 00:08:00.503 "subsystem": "bdev", 00:08:00.503 "config": [ 00:08:00.503 { 00:08:00.503 "params": { 00:08:00.503 "trtype": "pcie", 00:08:00.503 "traddr": "0000:00:10.0", 00:08:00.503 "name": "Nvme0" 00:08:00.503 }, 00:08:00.503 "method": "bdev_nvme_attach_controller" 00:08:00.503 }, 00:08:00.503 { 00:08:00.503 "method": "bdev_wait_for_examine" 00:08:00.503 } 00:08:00.503 ] 00:08:00.503 } 00:08:00.503 ] 00:08:00.503 } 00:08:00.762 [2024-04-23 02:52:39.709736] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:00.762 [2024-04-23 02:52:39.728678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.762 [2024-04-23 02:52:39.767610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.020  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:01.020 00:08:01.020 02:52:40 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:08:01.020 02:52:40 -- dd/basic_rw.sh@23 -- # count=3 00:08:01.020 02:52:40 -- dd/basic_rw.sh@24 -- # count=3 00:08:01.020 02:52:40 -- dd/basic_rw.sh@25 -- # size=49152 00:08:01.020 02:52:40 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:08:01.020 02:52:40 -- dd/common.sh@98 -- # xtrace_disable 00:08:01.020 02:52:40 -- common/autotest_common.sh@10 -- # set +x 00:08:01.279 02:52:40 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:08:01.279 02:52:40 -- dd/basic_rw.sh@30 -- # gen_conf 00:08:01.279 02:52:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.279 02:52:40 -- common/autotest_common.sh@10 -- # set +x 00:08:01.538 [2024-04-23 02:52:40.478417] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:08:01.538 [2024-04-23 02:52:40.478524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76344 ] 00:08:01.538 { 00:08:01.538 "subsystems": [ 00:08:01.538 { 00:08:01.538 "subsystem": "bdev", 00:08:01.538 "config": [ 00:08:01.538 { 00:08:01.538 "params": { 00:08:01.538 "trtype": "pcie", 00:08:01.538 "traddr": "0000:00:10.0", 00:08:01.538 "name": "Nvme0" 00:08:01.538 }, 00:08:01.538 "method": "bdev_nvme_attach_controller" 00:08:01.538 }, 00:08:01.538 { 00:08:01.538 "method": "bdev_wait_for_examine" 00:08:01.538 } 00:08:01.538 ] 00:08:01.538 } 00:08:01.538 ] 00:08:01.538 } 00:08:01.538 [2024-04-23 02:52:40.598954] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:01.538 [2024-04-23 02:52:40.617931] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.538 [2024-04-23 02:52:40.650929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.797  Copying: 48/48 [kB] (average 46 MBps) 00:08:01.797 00:08:01.797 02:52:40 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:08:01.797 02:52:40 -- dd/basic_rw.sh@37 -- # gen_conf 00:08:01.797 02:52:40 -- dd/common.sh@31 -- # xtrace_disable 00:08:01.797 02:52:40 -- common/autotest_common.sh@10 -- # set +x 00:08:01.797 [2024-04-23 02:52:40.924103] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:01.797 [2024-04-23 02:52:40.924244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76357 ] 00:08:01.797 { 00:08:01.797 "subsystems": [ 00:08:01.797 { 00:08:01.797 "subsystem": "bdev", 00:08:01.797 "config": [ 00:08:01.797 { 00:08:01.797 "params": { 00:08:01.797 "trtype": "pcie", 00:08:01.797 "traddr": "0000:00:10.0", 00:08:01.797 "name": "Nvme0" 00:08:01.797 }, 00:08:01.797 "method": "bdev_nvme_attach_controller" 00:08:01.797 }, 00:08:01.797 { 00:08:01.798 "method": "bdev_wait_for_examine" 00:08:01.798 } 00:08:01.798 ] 00:08:01.798 } 00:08:01.798 ] 00:08:01.798 } 00:08:02.057 [2024-04-23 02:52:41.039111] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:02.057 [2024-04-23 02:52:41.052321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.057 [2024-04-23 02:52:41.082606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.315  Copying: 48/48 [kB] (average 46 MBps) 00:08:02.315 00:08:02.315 02:52:41 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.316 02:52:41 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:08:02.316 02:52:41 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:02.316 02:52:41 -- dd/common.sh@11 -- # local nvme_ref= 00:08:02.316 02:52:41 -- dd/common.sh@12 -- # local size=49152 00:08:02.316 02:52:41 -- dd/common.sh@14 -- # local bs=1048576 00:08:02.316 02:52:41 -- dd/common.sh@15 -- # local count=1 00:08:02.316 02:52:41 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:02.316 02:52:41 -- dd/common.sh@18 -- # gen_conf 00:08:02.316 02:52:41 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.316 02:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.316 [2024-04-23 02:52:41.385569] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:02.316 [2024-04-23 02:52:41.386082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76373 ] 00:08:02.316 { 00:08:02.316 "subsystems": [ 00:08:02.316 { 00:08:02.316 "subsystem": "bdev", 00:08:02.316 "config": [ 00:08:02.316 { 00:08:02.316 "params": { 00:08:02.316 "trtype": "pcie", 00:08:02.316 "traddr": "0000:00:10.0", 00:08:02.316 "name": "Nvme0" 00:08:02.316 }, 00:08:02.316 "method": "bdev_nvme_attach_controller" 00:08:02.316 }, 00:08:02.316 { 00:08:02.316 "method": "bdev_wait_for_examine" 00:08:02.316 } 00:08:02.316 ] 00:08:02.316 } 00:08:02.316 ] 00:08:02.316 } 00:08:02.574 [2024-04-23 02:52:41.506331] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:02.574 [2024-04-23 02:52:41.525435] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.574 [2024-04-23 02:52:41.555242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.832  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:02.832 00:08:02.832 00:08:02.832 real 0m11.581s 00:08:02.832 user 0m8.643s 00:08:02.832 sys 0m3.518s 00:08:02.832 02:52:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:02.832 02:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.832 ************************************ 00:08:02.832 END TEST dd_rw 00:08:02.832 ************************************ 00:08:02.832 02:52:41 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:08:02.832 02:52:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:02.832 02:52:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:02.832 02:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.832 ************************************ 00:08:02.832 START TEST dd_rw_offset 00:08:02.832 ************************************ 00:08:02.832 02:52:41 -- common/autotest_common.sh@1111 -- # basic_offset 00:08:02.832 02:52:41 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:08:02.832 02:52:41 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:08:02.832 02:52:41 -- dd/common.sh@98 -- # xtrace_disable 00:08:02.832 02:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:02.832 02:52:41 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:08:02.833 02:52:41 -- dd/basic_rw.sh@56 -- # data=gt2ijpiuv1jftxx3esva259wtf1rg44s... (remainder of 4096-byte generated random test pattern elided) 00:08:02.833 02:52:41 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:08:02.833 02:52:41 -- dd/basic_rw.sh@59 -- # gen_conf 00:08:02.833 02:52:41 -- dd/common.sh@31 -- # xtrace_disable 00:08:02.833 02:52:41 -- common/autotest_common.sh@10 -- # set +x 00:08:03.091 [2024-04-23 02:52:42.018637] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:03.091 [2024-04-23 02:52:42.018730] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76408 ] { 00:08:03.091 "subsystems": [ 00:08:03.091 { 00:08:03.091 "subsystem": "bdev", 00:08:03.091 "config": [ 00:08:03.091 { 00:08:03.091 "params": { 00:08:03.091 "trtype": "pcie", 00:08:03.091 "traddr": "0000:00:10.0", 00:08:03.091 "name": "Nvme0" 00:08:03.091 }, 00:08:03.091 "method": "bdev_nvme_attach_controller" 00:08:03.091 }, 00:08:03.091 { 00:08:03.091 "method": "bdev_wait_for_examine" 00:08:03.091 } 00:08:03.091 ] 00:08:03.091 } 00:08:03.091 ] 00:08:03.091 } [2024-04-23 02:52:42.140697] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:03.091 [2024-04-23 02:52:42.158231] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.091 [2024-04-23 02:52:42.187767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.350  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:03.350 00:08:03.350 02:52:42 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:08:03.350 02:52:42 -- dd/basic_rw.sh@65 -- # gen_conf 00:08:03.350 02:52:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.350 02:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.350 [2024-04-23 02:52:42.480243] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:03.350 [2024-04-23 02:52:42.480332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:08:03.350 { 00:08:03.350 "subsystems": [ 00:08:03.350 { 00:08:03.350 "subsystem": "bdev", 00:08:03.350 "config": [ 00:08:03.350 { 00:08:03.350 "params": { 00:08:03.350 "trtype": "pcie", 00:08:03.350 "traddr": "0000:00:10.0", 00:08:03.350 "name": "Nvme0" 00:08:03.350 }, 00:08:03.350 "method": "bdev_nvme_attach_controller" 00:08:03.350 }, 00:08:03.350 { 00:08:03.350 "method": "bdev_wait_for_examine" 00:08:03.350 } 00:08:03.350 ] 00:08:03.350 } 00:08:03.350 ] 00:08:03.350 } 00:08:03.608 [2024-04-23 02:52:42.601053] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:03.608 [2024-04-23 02:52:42.617946] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.608 [2024-04-23 02:52:42.647728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.868  Copying: 4096/4096 [B] (average 4000 kBps) 00:08:03.868 00:08:03.868 02:52:42 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:03.869 02:52:42 -- dd/basic_rw.sh@72 -- # [[ 
gt2ijpiuv1jftxx3esva259wtf1rg44s... == \g\t\2\i\j\p\i\u\v\1\j\f\t\x\x\3\e\s\v\a... ]] (4096-byte random pattern matched; full value and its bash-escaped duplicate elided) 00:08:03.869 00:08:03.869 real 0m0.973s 00:08:03.869 user 0m0.659s 00:08:03.869 sys 0m0.385s 00:08:03.869 02:52:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:03.869 02:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.869 ************************************ 00:08:03.869 END TEST dd_rw_offset 00:08:03.869 ************************************ 00:08:03.869 02:52:42 -- dd/basic_rw.sh@1 -- # cleanup 00:08:03.869 02:52:42 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:03.869 02:52:42 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:03.869 02:52:42 -- dd/common.sh@11 -- # local nvme_ref= 00:08:03.869 02:52:42 -- dd/common.sh@12 -- # local size=0xffff 00:08:03.869 02:52:42 -- dd/common.sh@14 -- # local bs=1048576 00:08:03.869 02:52:42 -- dd/common.sh@15 -- # local count=1 00:08:03.869 02:52:42 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:03.869 02:52:42 -- dd/common.sh@18 -- # gen_conf 00:08:03.869 02:52:42 -- dd/common.sh@31 -- # xtrace_disable 00:08:03.869 02:52:42 -- common/autotest_common.sh@10 -- # set +x 00:08:03.869 [2024-04-23 02:52:42.986385] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:08:03.869 [2024-04-23 02:52:42.986504] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76451 ] 00:08:03.869 { 00:08:03.869 "subsystems": [ 00:08:03.869 { 00:08:03.869 "subsystem": "bdev", 00:08:03.869 "config": [ 00:08:03.869 { 00:08:03.869 "params": { 00:08:03.869 "trtype": "pcie", 00:08:03.869 "traddr": "0000:00:10.0", 00:08:03.869 "name": "Nvme0" 00:08:03.869 }, 00:08:03.869 "method": "bdev_nvme_attach_controller" 00:08:03.869 }, 00:08:03.869 { 00:08:03.869 "method": "bdev_wait_for_examine" 00:08:03.869 } 00:08:03.869 ] 00:08:03.869 } 00:08:03.869 ] 00:08:03.869 } 00:08:04.129 [2024-04-23 02:52:43.106811] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:04.129 [2024-04-23 02:52:43.125542] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.129 [2024-04-23 02:52:43.156026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.386  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:04.386 00:08:04.386 02:52:43 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.386 00:08:04.386 real 0m14.192s 00:08:04.386 user 0m10.173s 00:08:04.386 sys 0m4.498s 00:08:04.386 02:52:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:04.386 ************************************ 00:08:04.386 END TEST spdk_dd_basic_rw 00:08:04.386 02:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.386 ************************************ 00:08:04.387 02:52:43 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:04.387 02:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.387 02:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.387 02:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.387 ************************************ 00:08:04.387 START TEST spdk_dd_posix 00:08:04.387 ************************************ 00:08:04.387 02:52:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:04.645 * Looking for test storage... 
00:08:04.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:04.645 02:52:43 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:04.645 02:52:43 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.645 02:52:43 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.645 02:52:43 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.645 02:52:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.645 02:52:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.645 02:52:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.645 02:52:43 -- paths/export.sh@5 -- # export PATH 00:08:04.645 02:52:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.645 02:52:43 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:04.645 02:52:43 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:04.645 02:52:43 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:04.645 02:52:43 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:04.645 02:52:43 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:04.645 02:52:43 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.645 02:52:43 -- dd/posix.sh@130 -- # tests 00:08:04.645 02:52:43 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:04.645 * First test run, liburing in use 00:08:04.645 02:52:43 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:08:04.645 02:52:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:04.645 02:52:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.645 02:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.645 ************************************ 00:08:04.645 START TEST dd_flag_append 00:08:04.645 ************************************ 00:08:04.645 02:52:43 -- common/autotest_common.sh@1111 -- # append 00:08:04.645 02:52:43 -- dd/posix.sh@16 -- # local dump0 00:08:04.645 02:52:43 -- dd/posix.sh@17 -- # local dump1 00:08:04.645 02:52:43 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:04.645 02:52:43 -- dd/common.sh@98 -- # xtrace_disable 00:08:04.645 02:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.645 02:52:43 -- dd/posix.sh@19 -- # dump0=k6z5qza4enqwvfeaxv8xzpgsrp6t975p 00:08:04.645 02:52:43 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:04.645 02:52:43 -- dd/common.sh@98 -- # xtrace_disable 00:08:04.645 02:52:43 -- common/autotest_common.sh@10 -- # set +x 00:08:04.645 02:52:43 -- dd/posix.sh@20 -- # dump1=dujqdcpptypkxp5fgzaru48w28i2vzw7 00:08:04.645 02:52:43 -- dd/posix.sh@22 -- # printf %s k6z5qza4enqwvfeaxv8xzpgsrp6t975p 00:08:04.645 02:52:43 -- dd/posix.sh@23 -- # printf %s dujqdcpptypkxp5fgzaru48w28i2vzw7 00:08:04.645 02:52:43 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:04.645 [2024-04-23 02:52:43.735189] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:04.645 [2024-04-23 02:52:43.735267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76520 ] 00:08:04.904 [2024-04-23 02:52:43.855431] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:04.904 [2024-04-23 02:52:43.866672] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.904 [2024-04-23 02:52:43.902126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.904  Copying: 32/32 [B] (average 31 kBps) 00:08:04.904 00:08:05.162 02:52:44 -- dd/posix.sh@27 -- # [[ dujqdcpptypkxp5fgzaru48w28i2vzw7k6z5qza4enqwvfeaxv8xzpgsrp6t975p == \d\u\j\q\d\c\p\p\t\y\p\k\x\p\5\f\g\z\a\r\u\4\8\w\2\8\i\2\v\z\w\7\k\6\z\5\q\z\a\4\e\n\q\w\v\f\e\a\x\v\8\x\z\p\g\s\r\p\6\t\9\7\5\p ]] 00:08:05.162 00:08:05.162 real 0m0.381s 00:08:05.162 user 0m0.189s 00:08:05.162 sys 0m0.156s 00:08:05.162 02:52:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.162 ************************************ 00:08:05.162 END TEST dd_flag_append 00:08:05.162 ************************************ 00:08:05.162 02:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.162 02:52:44 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:05.162 02:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.163 02:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.163 02:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.163 ************************************ 00:08:05.163 START TEST dd_flag_directory 00:08:05.163 ************************************ 00:08:05.163 02:52:44 -- common/autotest_common.sh@1111 -- # directory 00:08:05.163 02:52:44 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.163 02:52:44 -- common/autotest_common.sh@638 -- # local es=0 00:08:05.163 02:52:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.163 02:52:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.163 02:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.163 02:52:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.163 02:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.163 02:52:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.163 02:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.163 02:52:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.163 02:52:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.163 02:52:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.163 [2024-04-23 02:52:44.215540] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:05.163 [2024-04-23 02:52:44.215751] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76552 ] 00:08:05.421 [2024-04-23 02:52:44.329920] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. 
There is no support for it in SPDK. Enabled only for validation. 00:08:05.421 [2024-04-23 02:52:44.343617] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.421 [2024-04-23 02:52:44.374318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.421 [2024-04-23 02:52:44.414033] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.421 [2024-04-23 02:52:44.414085] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.421 [2024-04-23 02:52:44.414118] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.421 [2024-04-23 02:52:44.470345] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.421 02:52:44 -- common/autotest_common.sh@641 -- # es=236 00:08:05.421 02:52:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:05.421 02:52:44 -- common/autotest_common.sh@650 -- # es=108 00:08:05.421 02:52:44 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:05.421 02:52:44 -- common/autotest_common.sh@658 -- # es=1 00:08:05.421 02:52:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:05.421 02:52:44 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.421 02:52:44 -- common/autotest_common.sh@638 -- # local es=0 00:08:05.421 02:52:44 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.421 02:52:44 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.421 02:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.421 02:52:44 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.421 02:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.421 02:52:44 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.421 02:52:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.421 02:52:44 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.421 02:52:44 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.421 02:52:44 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.680 [2024-04-23 02:52:44.591150] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:05.680 [2024-04-23 02:52:44.591246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76562 ] 00:08:05.680 [2024-04-23 02:52:44.711326] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:05.680 [2024-04-23 02:52:44.727358] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.680 [2024-04-23 02:52:44.757495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.680 [2024-04-23 02:52:44.797613] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.680 [2024-04-23 02:52:44.797665] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.680 [2024-04-23 02:52:44.797697] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.939 [2024-04-23 02:52:44.854435] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.939 02:52:44 -- common/autotest_common.sh@641 -- # es=236 00:08:05.939 02:52:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:05.939 02:52:44 -- common/autotest_common.sh@650 -- # es=108 00:08:05.939 02:52:44 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:05.939 02:52:44 -- common/autotest_common.sh@658 -- # es=1 00:08:05.939 02:52:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:05.939 00:08:05.939 real 0m0.751s 00:08:05.939 user 0m0.374s 00:08:05.939 sys 0m0.167s 00:08:05.939 02:52:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:05.939 ************************************ 00:08:05.939 END TEST dd_flag_directory 00:08:05.939 ************************************ 00:08:05.939 02:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.939 02:52:44 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:05.939 02:52:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:05.939 02:52:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:05.939 02:52:44 -- common/autotest_common.sh@10 -- # set +x 00:08:05.939 ************************************ 00:08:05.939 START TEST dd_flag_nofollow 00:08:05.939 ************************************ 00:08:05.939 02:52:45 -- common/autotest_common.sh@1111 -- # nofollow 00:08:05.939 02:52:45 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.939 02:52:45 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.939 02:52:45 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:05.939 02:52:45 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:05.939 02:52:45 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.939 02:52:45 -- common/autotest_common.sh@638 -- # local es=0 00:08:05.939 02:52:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.939 02:52:45 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.939 02:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.939 02:52:45 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.939 02:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.939 02:52:45 -- 
common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.939 02:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:05.939 02:52:45 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.939 02:52:45 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.939 02:52:45 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.939 [2024-04-23 02:52:45.080268] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:05.939 [2024-04-23 02:52:45.080350] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76595 ] 00:08:06.198 [2024-04-23 02:52:45.197964] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.198 [2024-04-23 02:52:45.215097] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.198 [2024-04-23 02:52:45.248945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.198 [2024-04-23 02:52:45.292667] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.198 [2024-04-23 02:52:45.292737] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:06.198 [2024-04-23 02:52:45.292770] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.457 [2024-04-23 02:52:45.355852] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.457 02:52:45 -- common/autotest_common.sh@641 -- # es=216 00:08:06.457 02:52:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:06.457 02:52:45 -- common/autotest_common.sh@650 -- # es=88 00:08:06.457 02:52:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:06.457 02:52:45 -- common/autotest_common.sh@658 -- # es=1 00:08:06.457 02:52:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:06.457 02:52:45 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.457 02:52:45 -- common/autotest_common.sh@638 -- # local es=0 00:08:06.457 02:52:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.457 02:52:45 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.457 02:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:06.457 02:52:45 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.457 02:52:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:06.457 02:52:45 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.457 02:52:45 -- common/autotest_common.sh@630 -- # case 
"$(type -t "$arg")" in 00:08:06.457 02:52:45 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.457 02:52:45 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.457 02:52:45 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:06.457 [2024-04-23 02:52:45.459363] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:06.457 [2024-04-23 02:52:45.459446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76605 ] 00:08:06.457 [2024-04-23 02:52:45.572911] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:06.457 [2024-04-23 02:52:45.583238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.457 [2024-04-23 02:52:45.614234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.716 [2024-04-23 02:52:45.654514] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:06.716 [2024-04-23 02:52:45.654585] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:06.716 [2024-04-23 02:52:45.654618] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.716 [2024-04-23 02:52:45.709147] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.716 02:52:45 -- common/autotest_common.sh@641 -- # es=216 00:08:06.716 02:52:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:06.716 02:52:45 -- common/autotest_common.sh@650 -- # es=88 00:08:06.716 02:52:45 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:06.716 02:52:45 -- common/autotest_common.sh@658 -- # es=1 00:08:06.716 02:52:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:06.716 02:52:45 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:06.716 02:52:45 -- dd/common.sh@98 -- # xtrace_disable 00:08:06.716 02:52:45 -- common/autotest_common.sh@10 -- # set +x 00:08:06.716 02:52:45 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.716 [2024-04-23 02:52:45.812036] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:06.716 [2024-04-23 02:52:45.812304] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76607 ] 00:08:06.984 [2024-04-23 02:52:45.926386] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:06.984 [2024-04-23 02:52:45.941150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.984 [2024-04-23 02:52:45.971261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.984  Copying: 512/512 [B] (average 500 kBps) 00:08:06.984 00:08:06.984 ************************************ 00:08:06.984 END TEST dd_flag_nofollow 00:08:06.984 ************************************ 00:08:06.984 02:52:46 -- dd/posix.sh@49 -- # [[ m1wc2pq98owebhje6xnwod4kg96ci4lf... == \m\1\w\c\2\p\q\9\8\o\w\e\b\h\j\e... ]] (512-byte random pattern matched; full value and its bash-escaped duplicate elided) 00:08:06.984 00:08:06.984 real 0m1.107s 00:08:06.984 user 0m0.532s 00:08:06.984 sys 0m0.328s 00:08:06.984 02:52:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:06.984 02:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 02:52:46 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:07.261 02:52:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:07.261 02:52:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.261 02:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 ************************************ 00:08:07.261 START TEST dd_flag_noatime 00:08:07.261 ************************************ 00:08:07.261 02:52:46 -- common/autotest_common.sh@1111 -- # noatime 00:08:07.261 02:52:46 -- dd/posix.sh@53 -- # local atime_if 00:08:07.261 02:52:46 -- dd/posix.sh@54 -- # local atime_of 00:08:07.261 02:52:46 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:07.261 02:52:46 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.261 02:52:46 -- common/autotest_common.sh@10 -- # set +x 00:08:07.261 02:52:46 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:07.261 02:52:46 -- dd/posix.sh@60 -- # atime_if=1713840766 00:08:07.261 02:52:46 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.261 02:52:46 -- dd/posix.sh@61 -- # atime_of=1713840766 00:08:07.261 02:52:46 -- dd/posix.sh@66 -- # sleep 1 00:08:08.199 02:52:47 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.199 [2024-04-23 02:52:47.296258] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:08.199 [2024-04-23 02:52:47.296340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76654 ] 00:08:08.458 [2024-04-23 02:52:47.410367] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:08.458 [2024-04-23 02:52:47.427564] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.458 [2024-04-23 02:52:47.457746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.458  Copying: 512/512 [B] (average 500 kBps) 00:08:08.458 00:08:08.717 02:52:47 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.717 02:52:47 -- dd/posix.sh@69 -- # (( atime_if == 1713840766 )) 00:08:08.717 02:52:47 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.717 02:52:47 -- dd/posix.sh@70 -- # (( atime_of == 1713840766 )) 00:08:08.717 02:52:47 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.717 [2024-04-23 02:52:47.673845] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:08.717 [2024-04-23 02:52:47.673940] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76668 ] 00:08:08.717 [2024-04-23 02:52:47.793962] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:08.717 [2024-04-23 02:52:47.811922] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.717 [2024-04-23 02:52:47.842986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.976  Copying: 512/512 [B] (average 500 kBps) 00:08:08.976 00:08:08.976 02:52:48 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.976 ************************************ 00:08:08.976 END TEST dd_flag_noatime 00:08:08.976 ************************************ 00:08:08.976 02:52:48 -- dd/posix.sh@73 -- # (( atime_if < 1713840767 )) 00:08:08.976 00:08:08.976 real 0m1.780s 00:08:08.976 user 0m0.384s 00:08:08.976 sys 0m0.327s 00:08:08.976 02:52:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:08.976 02:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:08.976 02:52:48 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:08.976 02:52:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:08.976 02:52:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:08.976 02:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:08.976 ************************************ 00:08:08.976 START TEST dd_flags_misc 00:08:08.976 ************************************ 00:08:08.976 02:52:48 -- common/autotest_common.sh@1111 -- # io 00:08:08.976 02:52:48 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:08.976 02:52:48 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:08.976 02:52:48 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:08.976 02:52:48 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:08.976 02:52:48 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:08.976 02:52:48 -- dd/common.sh@98 -- # xtrace_disable 00:08:08.976 02:52:48 -- common/autotest_common.sh@10 -- # set +x 00:08:09.235 02:52:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:09.235 02:52:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:09.235 [2024-04-23 02:52:48.180074] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:09.235 [2024-04-23 02:52:48.180360] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76700 ] 00:08:09.235 [2024-04-23 02:52:48.295056] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.235 [2024-04-23 02:52:48.308441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.235 [2024-04-23 02:52:48.341474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.494  Copying: 512/512 [B] (average 500 kBps) 00:08:09.494 00:08:09.494 02:52:48 -- dd/posix.sh@93 -- # [[ xbcijr0idcikg9fa5ar3q08d8qdzebav... == \x\b\c\i\j\r\0\i\d\c\i\k\g\9\f\a... ]] (512-byte random pattern matched; full value and its bash-escaped duplicate elided) 00:08:09.494 02:52:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:09.494 02:52:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:09.494 [2024-04-23 02:52:48.562074] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... [2024-04-23 02:52:48.562191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76710 ] 00:08:09.753 [2024-04-23 02:52:48.682303] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:09.753 [2024-04-23 02:52:48.698369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.753 [2024-04-23 02:52:48.731641] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.753  Copying: 512/512 [B] (average 500 kBps) 00:08:09.753 00:08:09.753 02:52:48 -- dd/posix.sh@93 -- # [[ xbcijr0idcikg9fa5ar3q08d8qdzebav... == \x\b\c\i\j\r\0\i\d\c\i\k\g\9\f\a... ]] (512-byte random pattern matched; full value and its bash-escaped duplicate elided) 00:08:09.754 02:52:48 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:09.754 02:52:48 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:10.012 [2024-04-23 02:52:48.923591] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... [2024-04-23 02:52:48.923675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76714 ] 00:08:10.012 [2024-04-23 02:52:49.037313] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:10.012 [2024-04-23 02:52:49.046845] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.012 [2024-04-23 02:52:49.077462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.272  Copying: 512/512 [B] (average 166 kBps) 00:08:10.272 00:08:10.272 02:52:49 -- dd/posix.sh@93 -- # [[ xbcijr0idcikg9fa5ar3q08d8qdzebavecthrqctf2h3us2e0ud4wsvm31vio2uacm6uvja1gq0z57nbsuc2cqqkz0tna16omvigxyhinyb88pl4hljnzr8fm0osmwe5jsixc1b3ofnyq1z9san6ktnz40l373kj46vebx5gniscah7l19mog7azyx312erff6ebsj1mj9o2w1a9dk36n4nl3b5r43f82t0sfo0ts00820ekt7zkqzw43q7iikyzqk1yfhqjagsv4ht2hckchmf3bbmr5y1nyze8flqzjfpi01kpxbq6etn4yncst42sqptbs8coh96e8ssd47r3hr4orjke0wxa7oo9lwrxbcl8zz5s7szxnhi3aseg5mskg1xm587wab9gx8wrpnk63zo0mlk3yngs36u4vqxcilit54isov154id5h2643wqs6q6gq34wlgo20eyareiktr9x3qgjc2096ben6ptjlxy855qgs5dolrnaa1520m22 == \x\b\c\i\j\r\0\i\d\c\i\k\g\9\f\a\5\a\r\3\q\0\8\d\8\q\d\z\e\b\a\v\e\c\t\h\r\q\c\t\f\2\h\3\u\s\2\e\0\u\d\4\w\s\v\m\3\1\v\i\o\2\u\a\c\m\6\u\v\j\a\1\g\q\0\z\5\7\n\b\s\u\c\2\c\q\q\k\z\0\t\n\a\1\6\o\m\v\i\g\x\y\h\i\n\y\b\8\8\p\l\4\h\l\j\n\z\r\8\f\m\0\o\s\m\w\e\5\j\s\i\x\c\1\b\3\o\f\n\y\q\1\z\9\s\a\n\6\k\t\n\z\4\0\l\3\7\3\k\j\4\6\v\e\b\x\5\g\n\i\s\c\a\h\7\l\1\9\m\o\g\7\a\z\y\x\3\1\2\e\r\f\f\6\e\b\s\j\1\m\j\9\o\2\w\1\a\9\d\k\3\6\n\4\n\l\3\b\5\r\4\3\f\8\2\t\0\s\f\o\0\t\s\0\0\8\2\0\e\k\t\7\z\k\q\z\w\4\3\q\7\i\i\k\y\z\q\k\1\y\f\h\q\j\a\g\s\v\4\h\t\2\h\c\k\c\h\m\f\3\b\b\m\r\5\y\1\n\y\z\e\8\f\l\q\z\j\f\p\i\0\1\k\p\x\b\q\6\e\t\n\4\y\n\c\s\t\4\2\s\q\p\t\b\s\8\c\o\h\9\6\e\8\s\s\d\4\7\r\3\h\r\4\o\r\j\k\e\0\w\x\a\7\o\o\9\l\w\r\x\b\c\l\8\z\z\5\s\7\s\z\x\n\h\i\3\a\s\e\g\5\m\s\k\g\1\x\m\5\8\7\w\a\b\9\g\x\8\w\r\p\n\k\6\3\z\o\0\m\l\k\3\y\n\g\s\3\6\u\4\v\q\x\c\i\l\i\t\5\4\i\s\o\v\1\5\4\i\d\5\h\2\6\4\3\w\q\s\6\q\6\g\q\3\4\w\l\g\o\2\0\e\y\a\r\e\i\k\t\r\9\x\3\q\g\j\c\2\0\9\6\b\e\n\6\p\t\j\l\x\y\8\5\5\q\g\s\5\d\o\l\r\n\a\a\1\5\2\0\m\2\2 ]] 00:08:10.272 02:52:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.272 02:52:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:10.272 [2024-04-23 02:52:49.275142] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:10.272 [2024-04-23 02:52:49.275228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76723 ] 00:08:10.272 [2024-04-23 02:52:49.388706] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
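The long backslash-escaped runs in these checks are not corruption: bash xtrace prints the quoted right-hand side of a [[ == ]] comparison with every character escaped so it reads as a literal pattern. A minimal sketch of the single line they expand from (dd/posix.sh@93 in the trace; variable names assumed):

  # assert the copy was bit-exact: compare source and destination contents
  [[ "$(< "$test_file0")" == "$(< "$test_file1")" ]]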
00:08:10.272 [2024-04-23 02:52:49.398636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.531 [2024-04-23 02:52:49.435089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.531  Copying: 512/512 [B] (average 500 kBps) 00:08:10.531 00:08:10.531 02:52:49 -- dd/posix.sh@93 -- # [[ xbcijr0idcikg9fa5ar3q08d8qdzebavecthrqctf2h3us2e0ud4wsvm31vio2uacm6uvja1gq0z57nbsuc2cqqkz0tna16omvigxyhinyb88pl4hljnzr8fm0osmwe5jsixc1b3ofnyq1z9san6ktnz40l373kj46vebx5gniscah7l19mog7azyx312erff6ebsj1mj9o2w1a9dk36n4nl3b5r43f82t0sfo0ts00820ekt7zkqzw43q7iikyzqk1yfhqjagsv4ht2hckchmf3bbmr5y1nyze8flqzjfpi01kpxbq6etn4yncst42sqptbs8coh96e8ssd47r3hr4orjke0wxa7oo9lwrxbcl8zz5s7szxnhi3aseg5mskg1xm587wab9gx8wrpnk63zo0mlk3yngs36u4vqxcilit54isov154id5h2643wqs6q6gq34wlgo20eyareiktr9x3qgjc2096ben6ptjlxy855qgs5dolrnaa1520m22 == \x\b\c\i\j\r\0\i\d\c\i\k\g\9\f\a\5\a\r\3\q\0\8\d\8\q\d\z\e\b\a\v\e\c\t\h\r\q\c\t\f\2\h\3\u\s\2\e\0\u\d\4\w\s\v\m\3\1\v\i\o\2\u\a\c\m\6\u\v\j\a\1\g\q\0\z\5\7\n\b\s\u\c\2\c\q\q\k\z\0\t\n\a\1\6\o\m\v\i\g\x\y\h\i\n\y\b\8\8\p\l\4\h\l\j\n\z\r\8\f\m\0\o\s\m\w\e\5\j\s\i\x\c\1\b\3\o\f\n\y\q\1\z\9\s\a\n\6\k\t\n\z\4\0\l\3\7\3\k\j\4\6\v\e\b\x\5\g\n\i\s\c\a\h\7\l\1\9\m\o\g\7\a\z\y\x\3\1\2\e\r\f\f\6\e\b\s\j\1\m\j\9\o\2\w\1\a\9\d\k\3\6\n\4\n\l\3\b\5\r\4\3\f\8\2\t\0\s\f\o\0\t\s\0\0\8\2\0\e\k\t\7\z\k\q\z\w\4\3\q\7\i\i\k\y\z\q\k\1\y\f\h\q\j\a\g\s\v\4\h\t\2\h\c\k\c\h\m\f\3\b\b\m\r\5\y\1\n\y\z\e\8\f\l\q\z\j\f\p\i\0\1\k\p\x\b\q\6\e\t\n\4\y\n\c\s\t\4\2\s\q\p\t\b\s\8\c\o\h\9\6\e\8\s\s\d\4\7\r\3\h\r\4\o\r\j\k\e\0\w\x\a\7\o\o\9\l\w\r\x\b\c\l\8\z\z\5\s\7\s\z\x\n\h\i\3\a\s\e\g\5\m\s\k\g\1\x\m\5\8\7\w\a\b\9\g\x\8\w\r\p\n\k\6\3\z\o\0\m\l\k\3\y\n\g\s\3\6\u\4\v\q\x\c\i\l\i\t\5\4\i\s\o\v\1\5\4\i\d\5\h\2\6\4\3\w\q\s\6\q\6\g\q\3\4\w\l\g\o\2\0\e\y\a\r\e\i\k\t\r\9\x\3\q\g\j\c\2\0\9\6\b\e\n\6\p\t\j\l\x\y\8\5\5\q\g\s\5\d\o\l\r\n\a\a\1\5\2\0\m\2\2 ]] 00:08:10.531 02:52:49 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:10.531 02:52:49 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:10.531 02:52:49 -- dd/common.sh@98 -- # xtrace_disable 00:08:10.531 02:52:49 -- common/autotest_common.sh@10 -- # set +x 00:08:10.531 02:52:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.531 02:52:49 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:10.531 [2024-04-23 02:52:49.671754] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:10.531 [2024-04-23 02:52:49.671848] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76733 ] 00:08:10.790 [2024-04-23 02:52:49.792395] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
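For anyone reproducing a run outside SPDK, a rough GNU dd analogue of the invocation starting above is the following sketch (spdk_dd's flag names mirror dd's iflag/oflag vocabulary, but the two tools are not interchangeable):

  dd if=dd.dump0 iflag=nonblock of=dd.dump1 oflag=direct bs=512 count=1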
00:08:10.790 [2024-04-23 02:52:49.809055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.790 [2024-04-23 02:52:49.839478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.049  Copying: 512/512 [B] (average 500 kBps) 00:08:11.049 00:08:11.049 02:52:49 -- dd/posix.sh@93 -- # [[ 80ng054jd6t9n7qxopiykpft5hen03b8o5brcmn2kz377b341fwvox45oesuuus7jkwj6hhhdya5t1q9d2m5uw53tfvq6d6utdwlr0yt8fznhnpd7q6pd8mzwf2auhu0mks3jljjvehp7dr9m2k6gbjukn4w162km9772a0xphy8e8twldn1l0vdyjlk78hjwefkr86weheu5z6hmfii34bysdyx92jjk9lpik6ijffyczwmep7bzta5c2ogr7ub5mbr8pikgp0bgrdrsxa18jlwtt8b8qsg2jq6dnj9mr2bjh8owgpirxxoac2fj09e011qye8i49lwl8re6184i9o2rf5pny5rz2e9rlksm8guz3z76dl3xnlo8kj2tdfyx8wxvbwhhrvpsfxrvbc8fd1ww929fmffgv28vad0k9pyy2ezcg682zhp3pgrsmyktfugs2v3fhdxg8b5cjy6b0bic3xwqjyqdbifhhd0kz5xpbvnv87h41mhsmgcw7ts == \8\0\n\g\0\5\4\j\d\6\t\9\n\7\q\x\o\p\i\y\k\p\f\t\5\h\e\n\0\3\b\8\o\5\b\r\c\m\n\2\k\z\3\7\7\b\3\4\1\f\w\v\o\x\4\5\o\e\s\u\u\u\s\7\j\k\w\j\6\h\h\h\d\y\a\5\t\1\q\9\d\2\m\5\u\w\5\3\t\f\v\q\6\d\6\u\t\d\w\l\r\0\y\t\8\f\z\n\h\n\p\d\7\q\6\p\d\8\m\z\w\f\2\a\u\h\u\0\m\k\s\3\j\l\j\j\v\e\h\p\7\d\r\9\m\2\k\6\g\b\j\u\k\n\4\w\1\6\2\k\m\9\7\7\2\a\0\x\p\h\y\8\e\8\t\w\l\d\n\1\l\0\v\d\y\j\l\k\7\8\h\j\w\e\f\k\r\8\6\w\e\h\e\u\5\z\6\h\m\f\i\i\3\4\b\y\s\d\y\x\9\2\j\j\k\9\l\p\i\k\6\i\j\f\f\y\c\z\w\m\e\p\7\b\z\t\a\5\c\2\o\g\r\7\u\b\5\m\b\r\8\p\i\k\g\p\0\b\g\r\d\r\s\x\a\1\8\j\l\w\t\t\8\b\8\q\s\g\2\j\q\6\d\n\j\9\m\r\2\b\j\h\8\o\w\g\p\i\r\x\x\o\a\c\2\f\j\0\9\e\0\1\1\q\y\e\8\i\4\9\l\w\l\8\r\e\6\1\8\4\i\9\o\2\r\f\5\p\n\y\5\r\z\2\e\9\r\l\k\s\m\8\g\u\z\3\z\7\6\d\l\3\x\n\l\o\8\k\j\2\t\d\f\y\x\8\w\x\v\b\w\h\h\r\v\p\s\f\x\r\v\b\c\8\f\d\1\w\w\9\2\9\f\m\f\f\g\v\2\8\v\a\d\0\k\9\p\y\y\2\e\z\c\g\6\8\2\z\h\p\3\p\g\r\s\m\y\k\t\f\u\g\s\2\v\3\f\h\d\x\g\8\b\5\c\j\y\6\b\0\b\i\c\3\x\w\q\j\y\q\d\b\i\f\h\h\d\0\k\z\5\x\p\b\v\n\v\8\7\h\4\1\m\h\s\m\g\c\w\7\t\s ]] 00:08:11.049 02:52:49 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.049 02:52:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:11.049 [2024-04-23 02:52:50.035441] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:11.049 [2024-04-23 02:52:50.035529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76737 ] 00:08:11.049 [2024-04-23 02:52:50.149605] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:11.049 [2024-04-23 02:52:50.159157] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.049 [2024-04-23 02:52:50.191039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.308  Copying: 512/512 [B] (average 500 kBps) 00:08:11.308 00:08:11.308 02:52:50 -- dd/posix.sh@93 -- # [[ 80ng054jd6t9n7qxopiykpft5hen03b8o5brcmn2kz377b341fwvox45oesuuus7jkwj6hhhdya5t1q9d2m5uw53tfvq6d6utdwlr0yt8fznhnpd7q6pd8mzwf2auhu0mks3jljjvehp7dr9m2k6gbjukn4w162km9772a0xphy8e8twldn1l0vdyjlk78hjwefkr86weheu5z6hmfii34bysdyx92jjk9lpik6ijffyczwmep7bzta5c2ogr7ub5mbr8pikgp0bgrdrsxa18jlwtt8b8qsg2jq6dnj9mr2bjh8owgpirxxoac2fj09e011qye8i49lwl8re6184i9o2rf5pny5rz2e9rlksm8guz3z76dl3xnlo8kj2tdfyx8wxvbwhhrvpsfxrvbc8fd1ww929fmffgv28vad0k9pyy2ezcg682zhp3pgrsmyktfugs2v3fhdxg8b5cjy6b0bic3xwqjyqdbifhhd0kz5xpbvnv87h41mhsmgcw7ts == \8\0\n\g\0\5\4\j\d\6\t\9\n\7\q\x\o\p\i\y\k\p\f\t\5\h\e\n\0\3\b\8\o\5\b\r\c\m\n\2\k\z\3\7\7\b\3\4\1\f\w\v\o\x\4\5\o\e\s\u\u\u\s\7\j\k\w\j\6\h\h\h\d\y\a\5\t\1\q\9\d\2\m\5\u\w\5\3\t\f\v\q\6\d\6\u\t\d\w\l\r\0\y\t\8\f\z\n\h\n\p\d\7\q\6\p\d\8\m\z\w\f\2\a\u\h\u\0\m\k\s\3\j\l\j\j\v\e\h\p\7\d\r\9\m\2\k\6\g\b\j\u\k\n\4\w\1\6\2\k\m\9\7\7\2\a\0\x\p\h\y\8\e\8\t\w\l\d\n\1\l\0\v\d\y\j\l\k\7\8\h\j\w\e\f\k\r\8\6\w\e\h\e\u\5\z\6\h\m\f\i\i\3\4\b\y\s\d\y\x\9\2\j\j\k\9\l\p\i\k\6\i\j\f\f\y\c\z\w\m\e\p\7\b\z\t\a\5\c\2\o\g\r\7\u\b\5\m\b\r\8\p\i\k\g\p\0\b\g\r\d\r\s\x\a\1\8\j\l\w\t\t\8\b\8\q\s\g\2\j\q\6\d\n\j\9\m\r\2\b\j\h\8\o\w\g\p\i\r\x\x\o\a\c\2\f\j\0\9\e\0\1\1\q\y\e\8\i\4\9\l\w\l\8\r\e\6\1\8\4\i\9\o\2\r\f\5\p\n\y\5\r\z\2\e\9\r\l\k\s\m\8\g\u\z\3\z\7\6\d\l\3\x\n\l\o\8\k\j\2\t\d\f\y\x\8\w\x\v\b\w\h\h\r\v\p\s\f\x\r\v\b\c\8\f\d\1\w\w\9\2\9\f\m\f\f\g\v\2\8\v\a\d\0\k\9\p\y\y\2\e\z\c\g\6\8\2\z\h\p\3\p\g\r\s\m\y\k\t\f\u\g\s\2\v\3\f\h\d\x\g\8\b\5\c\j\y\6\b\0\b\i\c\3\x\w\q\j\y\q\d\b\i\f\h\h\d\0\k\z\5\x\p\b\v\n\v\8\7\h\4\1\m\h\s\m\g\c\w\7\t\s ]] 00:08:11.308 02:52:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.308 02:52:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:11.308 [2024-04-23 02:52:50.387704] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:11.308 [2024-04-23 02:52:50.387788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76746 ] 00:08:11.568 [2024-04-23 02:52:50.501423] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:11.568 [2024-04-23 02:52:50.516139] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.568 [2024-04-23 02:52:50.546273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.568  Copying: 512/512 [B] (average 500 kBps) 00:08:11.568 00:08:11.568 02:52:50 -- dd/posix.sh@93 -- # [[ 80ng054jd6t9n7qxopiykpft5hen03b8o5brcmn2kz377b341fwvox45oesuuus7jkwj6hhhdya5t1q9d2m5uw53tfvq6d6utdwlr0yt8fznhnpd7q6pd8mzwf2auhu0mks3jljjvehp7dr9m2k6gbjukn4w162km9772a0xphy8e8twldn1l0vdyjlk78hjwefkr86weheu5z6hmfii34bysdyx92jjk9lpik6ijffyczwmep7bzta5c2ogr7ub5mbr8pikgp0bgrdrsxa18jlwtt8b8qsg2jq6dnj9mr2bjh8owgpirxxoac2fj09e011qye8i49lwl8re6184i9o2rf5pny5rz2e9rlksm8guz3z76dl3xnlo8kj2tdfyx8wxvbwhhrvpsfxrvbc8fd1ww929fmffgv28vad0k9pyy2ezcg682zhp3pgrsmyktfugs2v3fhdxg8b5cjy6b0bic3xwqjyqdbifhhd0kz5xpbvnv87h41mhsmgcw7ts == \8\0\n\g\0\5\4\j\d\6\t\9\n\7\q\x\o\p\i\y\k\p\f\t\5\h\e\n\0\3\b\8\o\5\b\r\c\m\n\2\k\z\3\7\7\b\3\4\1\f\w\v\o\x\4\5\o\e\s\u\u\u\s\7\j\k\w\j\6\h\h\h\d\y\a\5\t\1\q\9\d\2\m\5\u\w\5\3\t\f\v\q\6\d\6\u\t\d\w\l\r\0\y\t\8\f\z\n\h\n\p\d\7\q\6\p\d\8\m\z\w\f\2\a\u\h\u\0\m\k\s\3\j\l\j\j\v\e\h\p\7\d\r\9\m\2\k\6\g\b\j\u\k\n\4\w\1\6\2\k\m\9\7\7\2\a\0\x\p\h\y\8\e\8\t\w\l\d\n\1\l\0\v\d\y\j\l\k\7\8\h\j\w\e\f\k\r\8\6\w\e\h\e\u\5\z\6\h\m\f\i\i\3\4\b\y\s\d\y\x\9\2\j\j\k\9\l\p\i\k\6\i\j\f\f\y\c\z\w\m\e\p\7\b\z\t\a\5\c\2\o\g\r\7\u\b\5\m\b\r\8\p\i\k\g\p\0\b\g\r\d\r\s\x\a\1\8\j\l\w\t\t\8\b\8\q\s\g\2\j\q\6\d\n\j\9\m\r\2\b\j\h\8\o\w\g\p\i\r\x\x\o\a\c\2\f\j\0\9\e\0\1\1\q\y\e\8\i\4\9\l\w\l\8\r\e\6\1\8\4\i\9\o\2\r\f\5\p\n\y\5\r\z\2\e\9\r\l\k\s\m\8\g\u\z\3\z\7\6\d\l\3\x\n\l\o\8\k\j\2\t\d\f\y\x\8\w\x\v\b\w\h\h\r\v\p\s\f\x\r\v\b\c\8\f\d\1\w\w\9\2\9\f\m\f\f\g\v\2\8\v\a\d\0\k\9\p\y\y\2\e\z\c\g\6\8\2\z\h\p\3\p\g\r\s\m\y\k\t\f\u\g\s\2\v\3\f\h\d\x\g\8\b\5\c\j\y\6\b\0\b\i\c\3\x\w\q\j\y\q\d\b\i\f\h\h\d\0\k\z\5\x\p\b\v\n\v\8\7\h\4\1\m\h\s\m\g\c\w\7\t\s ]] 00:08:11.568 02:52:50 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.568 02:52:50 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:11.826 [2024-04-23 02:52:50.734567] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:11.827 [2024-04-23 02:52:50.734651] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76756 ] 00:08:11.827 [2024-04-23 02:52:50.848509] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:11.827 [2024-04-23 02:52:50.857799] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.827 [2024-04-23 02:52:50.887625] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.085  Copying: 512/512 [B] (average 250 kBps) 00:08:12.085 00:08:12.085 02:52:51 -- dd/posix.sh@93 -- # [[ 80ng054jd6t9n7qxopiykpft5hen03b8o5brcmn2kz377b341fwvox45oesuuus7jkwj6hhhdya5t1q9d2m5uw53tfvq6d6utdwlr0yt8fznhnpd7q6pd8mzwf2auhu0mks3jljjvehp7dr9m2k6gbjukn4w162km9772a0xphy8e8twldn1l0vdyjlk78hjwefkr86weheu5z6hmfii34bysdyx92jjk9lpik6ijffyczwmep7bzta5c2ogr7ub5mbr8pikgp0bgrdrsxa18jlwtt8b8qsg2jq6dnj9mr2bjh8owgpirxxoac2fj09e011qye8i49lwl8re6184i9o2rf5pny5rz2e9rlksm8guz3z76dl3xnlo8kj2tdfyx8wxvbwhhrvpsfxrvbc8fd1ww929fmffgv28vad0k9pyy2ezcg682zhp3pgrsmyktfugs2v3fhdxg8b5cjy6b0bic3xwqjyqdbifhhd0kz5xpbvnv87h41mhsmgcw7ts == \8\0\n\g\0\5\4\j\d\6\t\9\n\7\q\x\o\p\i\y\k\p\f\t\5\h\e\n\0\3\b\8\o\5\b\r\c\m\n\2\k\z\3\7\7\b\3\4\1\f\w\v\o\x\4\5\o\e\s\u\u\u\s\7\j\k\w\j\6\h\h\h\d\y\a\5\t\1\q\9\d\2\m\5\u\w\5\3\t\f\v\q\6\d\6\u\t\d\w\l\r\0\y\t\8\f\z\n\h\n\p\d\7\q\6\p\d\8\m\z\w\f\2\a\u\h\u\0\m\k\s\3\j\l\j\j\v\e\h\p\7\d\r\9\m\2\k\6\g\b\j\u\k\n\4\w\1\6\2\k\m\9\7\7\2\a\0\x\p\h\y\8\e\8\t\w\l\d\n\1\l\0\v\d\y\j\l\k\7\8\h\j\w\e\f\k\r\8\6\w\e\h\e\u\5\z\6\h\m\f\i\i\3\4\b\y\s\d\y\x\9\2\j\j\k\9\l\p\i\k\6\i\j\f\f\y\c\z\w\m\e\p\7\b\z\t\a\5\c\2\o\g\r\7\u\b\5\m\b\r\8\p\i\k\g\p\0\b\g\r\d\r\s\x\a\1\8\j\l\w\t\t\8\b\8\q\s\g\2\j\q\6\d\n\j\9\m\r\2\b\j\h\8\o\w\g\p\i\r\x\x\o\a\c\2\f\j\0\9\e\0\1\1\q\y\e\8\i\4\9\l\w\l\8\r\e\6\1\8\4\i\9\o\2\r\f\5\p\n\y\5\r\z\2\e\9\r\l\k\s\m\8\g\u\z\3\z\7\6\d\l\3\x\n\l\o\8\k\j\2\t\d\f\y\x\8\w\x\v\b\w\h\h\r\v\p\s\f\x\r\v\b\c\8\f\d\1\w\w\9\2\9\f\m\f\f\g\v\2\8\v\a\d\0\k\9\p\y\y\2\e\z\c\g\6\8\2\z\h\p\3\p\g\r\s\m\y\k\t\f\u\g\s\2\v\3\f\h\d\x\g\8\b\5\c\j\y\6\b\0\b\i\c\3\x\w\q\j\y\q\d\b\i\f\h\h\d\0\k\z\5\x\p\b\v\n\v\8\7\h\4\1\m\h\s\m\g\c\w\7\t\s ]] 00:08:12.085 00:08:12.085 real 0m2.922s 00:08:12.085 user 0m1.401s 00:08:12.085 sys 0m1.275s 00:08:12.085 02:52:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:12.085 ************************************ 00:08:12.085 END TEST dd_flags_misc 00:08:12.085 ************************************ 00:08:12.085 02:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.085 02:52:51 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:12.085 02:52:51 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:12.085 * Second test run, disabling liburing, forcing AIO 00:08:12.085 02:52:51 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:12.085 02:52:51 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:12.085 02:52:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.085 02:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.085 02:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.085 ************************************ 00:08:12.085 START TEST dd_flag_append_forced_aio 00:08:12.085 ************************************ 00:08:12.085 02:52:51 -- common/autotest_common.sh@1111 -- # append 00:08:12.085 02:52:51 -- dd/posix.sh@16 -- # local dump0 00:08:12.085 02:52:51 -- dd/posix.sh@17 -- # local dump1 00:08:12.085 02:52:51 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:12.085 02:52:51 -- dd/common.sh@98 -- # xtrace_disable 00:08:12.085 02:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.085 02:52:51 -- dd/posix.sh@19 -- # dump0=k61e448rcga3f9jz4g7ds7xvc95wvw8x 00:08:12.085 02:52:51 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:12.085 02:52:51 -- dd/common.sh@98 -- 
# xtrace_disable 00:08:12.085 02:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.085 02:52:51 -- dd/posix.sh@20 -- # dump1=e7apo6f4ilbz2fo5iog3nqsgrq3wgm7f 00:08:12.085 02:52:51 -- dd/posix.sh@22 -- # printf %s k61e448rcga3f9jz4g7ds7xvc95wvw8x 00:08:12.085 02:52:51 -- dd/posix.sh@23 -- # printf %s e7apo6f4ilbz2fo5iog3nqsgrq3wgm7f 00:08:12.085 02:52:51 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:12.085 [2024-04-23 02:52:51.232352] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:12.085 [2024-04-23 02:52:51.232448] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76783 ] 00:08:12.344 [2024-04-23 02:52:51.353378] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:12.344 [2024-04-23 02:52:51.369932] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.344 [2024-04-23 02:52:51.405174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.602  Copying: 32/32 [B] (average 31 kBps) 00:08:12.602 00:08:12.602 02:52:51 -- dd/posix.sh@27 -- # [[ e7apo6f4ilbz2fo5iog3nqsgrq3wgm7fk61e448rcga3f9jz4g7ds7xvc95wvw8x == \e\7\a\p\o\6\f\4\i\l\b\z\2\f\o\5\i\o\g\3\n\q\s\g\r\q\3\w\g\m\7\f\k\6\1\e\4\4\8\r\c\g\a\3\f\9\j\z\4\g\7\d\s\7\x\v\c\9\5\w\v\w\8\x ]] 00:08:12.602 00:08:12.602 real 0m0.420s 00:08:12.602 user 0m0.194s 00:08:12.602 sys 0m0.101s 00:08:12.602 ************************************ 00:08:12.602 END TEST dd_flag_append_forced_aio 00:08:12.602 ************************************ 00:08:12.602 02:52:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:12.602 02:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.602 02:52:51 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:12.602 02:52:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:12.602 02:52:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.602 02:52:51 -- common/autotest_common.sh@10 -- # set +x 00:08:12.602 ************************************ 00:08:12.602 START TEST dd_flag_directory_forced_aio 00:08:12.602 ************************************ 00:08:12.602 02:52:51 -- common/autotest_common.sh@1111 -- # directory 00:08:12.602 02:52:51 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.602 02:52:51 -- common/autotest_common.sh@638 -- # local es=0 00:08:12.602 02:52:51 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.602 02:52:51 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.602 02:52:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:12.602 02:52:51 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.602 02:52:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 
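To spell out what the append test above (dd_flag_append_forced_aio) asserted: two random 32-character strings are generated, one is written to each dump file, and after an --oflag=append copy of dump0 into dump1 the destination must equal dump1's original contents followed by dump0's. A condensed sketch of the steps as they appear in the trace (gen_bytes is the suite's own helper):

  dump0=$(gen_bytes 32); dump1=$(gen_bytes 32)   # 32 random characters each
  printf %s "$dump0" > dd.dump0
  printf %s "$dump1" > dd.dump1
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append
  [[ "$(< dd.dump1)" == "${dump1}${dump0}" ]]    # the e7apo...k61e... check above

Note the --aio switch on the command line: from this point the suite is in the second pass announced earlier ("* Second test run, disabling liburing, forcing AIO"), so these copies exercise the AIO backend rather than io_uring.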
00:08:12.602 02:52:51 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.602 02:52:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:12.602 02:52:51 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:12.602 02:52:51 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:12.602 02:52:51 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:12.860 [2024-04-23 02:52:51.761527] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:12.860 [2024-04-23 02:52:51.761615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76819 ] 00:08:12.860 [2024-04-23 02:52:51.881948] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:12.860 [2024-04-23 02:52:51.898693] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.860 [2024-04-23 02:52:51.928478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.860 [2024-04-23 02:52:51.969648] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:12.860 [2024-04-23 02:52:51.969731] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:12.860 [2024-04-23 02:52:51.969763] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.118 [2024-04-23 02:52:52.025410] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:13.118 02:52:52 -- common/autotest_common.sh@641 -- # es=236 00:08:13.118 02:52:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:13.118 02:52:52 -- common/autotest_common.sh@650 -- # es=108 00:08:13.118 02:52:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:13.118 02:52:52 -- common/autotest_common.sh@658 -- # es=1 00:08:13.118 02:52:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:13.118 02:52:52 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.118 02:52:52 -- common/autotest_common.sh@638 -- # local es=0 00:08:13.118 02:52:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.118 02:52:52 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.118 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.118 02:52:52 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.118 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.118 02:52:52 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.118 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t 
"$arg")" in 00:08:13.118 02:52:52 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.118 02:52:52 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.118 02:52:52 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:13.118 [2024-04-23 02:52:52.139321] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:13.118 [2024-04-23 02:52:52.139422] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76823 ] 00:08:13.118 [2024-04-23 02:52:52.259975] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:13.377 [2024-04-23 02:52:52.278561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.377 [2024-04-23 02:52:52.309996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.377 [2024-04-23 02:52:52.353755] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.377 [2024-04-23 02:52:52.353815] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:13.377 [2024-04-23 02:52:52.353848] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.377 [2024-04-23 02:52:52.408612] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:13.377 02:52:52 -- common/autotest_common.sh@641 -- # es=236 00:08:13.377 02:52:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:13.377 02:52:52 -- common/autotest_common.sh@650 -- # es=108 00:08:13.377 02:52:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:13.377 02:52:52 -- common/autotest_common.sh@658 -- # es=1 00:08:13.377 02:52:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:13.377 00:08:13.377 real 0m0.762s 00:08:13.377 user 0m0.361s 00:08:13.377 sys 0m0.188s 00:08:13.377 02:52:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:13.377 02:52:52 -- common/autotest_common.sh@10 -- # set +x 00:08:13.377 ************************************ 00:08:13.377 END TEST dd_flag_directory_forced_aio 00:08:13.377 ************************************ 00:08:13.377 02:52:52 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:13.377 02:52:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:13.377 02:52:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:13.377 02:52:52 -- common/autotest_common.sh@10 -- # set +x 00:08:13.636 ************************************ 00:08:13.636 START TEST dd_flag_nofollow_forced_aio 00:08:13.636 ************************************ 00:08:13.636 02:52:52 -- common/autotest_common.sh@1111 -- # nofollow 00:08:13.636 02:52:52 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.636 02:52:52 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.636 02:52:52 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.636 02:52:52 -- 
dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.636 02:52:52 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.636 02:52:52 -- common/autotest_common.sh@638 -- # local es=0 00:08:13.636 02:52:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.636 02:52:52 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.636 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.636 02:52:52 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.636 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.636 02:52:52 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.636 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.636 02:52:52 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.636 02:52:52 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.636 02:52:52 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:13.636 [2024-04-23 02:52:52.650640] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:13.636 [2024-04-23 02:52:52.650740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76861 ] 00:08:13.636 [2024-04-23 02:52:52.771849] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
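The directory-flag test that completed just above is a negative test: opening a regular dump file with O_DIRECTORY must fail, and the NOT wrapper asserts the nonzero exit, so the "Could not open file ...: Not a directory" errors in the trace are expected. A sketch of the pattern with the NOT helper inlined:

  # both directions must be rejected with ENOTDIR
  if spdk_dd --aio --if=dd.dump0 --iflag=directory --of=dd.dump0; then exit 1; fi
  if spdk_dd --aio --if=dd.dump0 --of=dd.dump0 --oflag=directory; then exit 1; fi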
00:08:13.636 [2024-04-23 02:52:52.790777] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.895 [2024-04-23 02:52:52.822889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.895 [2024-04-23 02:52:52.861761] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:13.895 [2024-04-23 02:52:52.861811] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:13.895 [2024-04-23 02:52:52.861842] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:13.895 [2024-04-23 02:52:52.922629] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:13.895 02:52:52 -- common/autotest_common.sh@641 -- # es=216 00:08:13.895 02:52:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:13.895 02:52:52 -- common/autotest_common.sh@650 -- # es=88 00:08:13.895 02:52:52 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:13.896 02:52:52 -- common/autotest_common.sh@658 -- # es=1 00:08:13.896 02:52:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:13.896 02:52:52 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:13.896 02:52:52 -- common/autotest_common.sh@638 -- # local es=0 00:08:13.896 02:52:52 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:13.896 02:52:52 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.896 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.896 02:52:52 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.896 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.896 02:52:52 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.896 02:52:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:13.896 02:52:52 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:13.896 02:52:52 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:13.896 02:52:52 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:13.896 [2024-04-23 02:52:53.045330] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:13.896 [2024-04-23 02:52:53.045446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76865 ] 00:08:14.156 [2024-04-23 02:52:53.166260] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:14.156 [2024-04-23 02:52:53.183354] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.156 [2024-04-23 02:52:53.213705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.156 [2024-04-23 02:52:53.253970] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:14.156 [2024-04-23 02:52:53.254037] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:14.156 [2024-04-23 02:52:53.254070] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:14.156 [2024-04-23 02:52:53.309205] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:14.414 02:52:53 -- common/autotest_common.sh@641 -- # es=216 00:08:14.414 02:52:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:14.414 02:52:53 -- common/autotest_common.sh@650 -- # es=88 00:08:14.414 02:52:53 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:14.414 02:52:53 -- common/autotest_common.sh@658 -- # es=1 00:08:14.414 02:52:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:14.414 02:52:53 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:14.414 02:52:53 -- dd/common.sh@98 -- # xtrace_disable 00:08:14.414 02:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:14.414 02:52:53 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.414 [2024-04-23 02:52:53.417343] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:14.414 [2024-04-23 02:52:53.417430] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76871 ] 00:08:14.414 [2024-04-23 02:52:53.531265] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
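The nofollow runs above follow the same expected-failure pattern, this time through symlinks: dd.dump0.link and dd.dump1.link are created with ln -fs, any open through them with O_NOFOLLOW must fail ("Too many levels of symbolic links", i.e. ELOOP), and a final copy through the link without the flag must succeed. A sketch, again inlining the NOT helper:

  ln -fs dd.dump0 dd.dump0.link
  ln -fs dd.dump1 dd.dump1.link
  if spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then exit 1; fi
  if spdk_dd --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow; then exit 1; fi
  spdk_dd --aio --if=dd.dump0.link --of=dd.dump1   # resolves normally without the flag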
00:08:14.414 [2024-04-23 02:52:53.540943] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.680 [2024-04-23 02:52:53.571605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.680  Copying: 512/512 [B] (average 500 kBps) 00:08:14.680 00:08:14.680 02:52:53 -- dd/posix.sh@49 -- # [[ lu3nk1v54v11n1m2n90dy73iny499fjghzyi1ack8rvm1jiqgipahkgup04f5qjciqm25li9y6ta4ns2pjcbi3piuxaceu4cxn0kpfpf47xt8gtbq7wlfoij5tuigr6ugxhmxdanjqfvt9sgvezusi89ihmux9wcxiogwvhjlfstkagczlxw8ro3jtbeln8lxd63uckbxa04igslzntpiuajr1kji9xydcxuwqe9vw3c7ryiwbiregk447kxkpjw0ct19feu313ggc48qbu6z8i3nxlddfulwu3crpvvi1ybnnx194dmum58s4s7sh3zfwie0kqp9bibzrweo6sttr8r50xj531ocfef63p6khj2fdkvvmpkzmy93pt8ow0c8xpfxr9x4cp4nwnjwn9eoe6d9nxle3bq5vnc8nbdptqtziotsgfza1a453am1ko5pxr1vauczcqlm56czdef07sq6ze8jzzcp4wlaz37ewi0hybikny56xhvzg9eyywg == \l\u\3\n\k\1\v\5\4\v\1\1\n\1\m\2\n\9\0\d\y\7\3\i\n\y\4\9\9\f\j\g\h\z\y\i\1\a\c\k\8\r\v\m\1\j\i\q\g\i\p\a\h\k\g\u\p\0\4\f\5\q\j\c\i\q\m\2\5\l\i\9\y\6\t\a\4\n\s\2\p\j\c\b\i\3\p\i\u\x\a\c\e\u\4\c\x\n\0\k\p\f\p\f\4\7\x\t\8\g\t\b\q\7\w\l\f\o\i\j\5\t\u\i\g\r\6\u\g\x\h\m\x\d\a\n\j\q\f\v\t\9\s\g\v\e\z\u\s\i\8\9\i\h\m\u\x\9\w\c\x\i\o\g\w\v\h\j\l\f\s\t\k\a\g\c\z\l\x\w\8\r\o\3\j\t\b\e\l\n\8\l\x\d\6\3\u\c\k\b\x\a\0\4\i\g\s\l\z\n\t\p\i\u\a\j\r\1\k\j\i\9\x\y\d\c\x\u\w\q\e\9\v\w\3\c\7\r\y\i\w\b\i\r\e\g\k\4\4\7\k\x\k\p\j\w\0\c\t\1\9\f\e\u\3\1\3\g\g\c\4\8\q\b\u\6\z\8\i\3\n\x\l\d\d\f\u\l\w\u\3\c\r\p\v\v\i\1\y\b\n\n\x\1\9\4\d\m\u\m\5\8\s\4\s\7\s\h\3\z\f\w\i\e\0\k\q\p\9\b\i\b\z\r\w\e\o\6\s\t\t\r\8\r\5\0\x\j\5\3\1\o\c\f\e\f\6\3\p\6\k\h\j\2\f\d\k\v\v\m\p\k\z\m\y\9\3\p\t\8\o\w\0\c\8\x\p\f\x\r\9\x\4\c\p\4\n\w\n\j\w\n\9\e\o\e\6\d\9\n\x\l\e\3\b\q\5\v\n\c\8\n\b\d\p\t\q\t\z\i\o\t\s\g\f\z\a\1\a\4\5\3\a\m\1\k\o\5\p\x\r\1\v\a\u\c\z\c\q\l\m\5\6\c\z\d\e\f\0\7\s\q\6\z\e\8\j\z\z\c\p\4\w\l\a\z\3\7\e\w\i\0\h\y\b\i\k\n\y\5\6\x\h\v\z\g\9\e\y\y\w\g ]] 00:08:14.680 00:08:14.680 real 0m1.160s 00:08:14.680 user 0m0.559s 00:08:14.680 sys 0m0.269s 00:08:14.680 02:52:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:14.680 ************************************ 00:08:14.680 END TEST dd_flag_nofollow_forced_aio 00:08:14.680 02:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:14.680 ************************************ 00:08:14.680 02:52:53 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:14.680 02:52:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:14.680 02:52:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.680 02:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:14.950 ************************************ 00:08:14.950 START TEST dd_flag_noatime_forced_aio 00:08:14.950 ************************************ 00:08:14.950 02:52:53 -- common/autotest_common.sh@1111 -- # noatime 00:08:14.950 02:52:53 -- dd/posix.sh@53 -- # local atime_if 00:08:14.950 02:52:53 -- dd/posix.sh@54 -- # local atime_of 00:08:14.950 02:52:53 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:14.950 02:52:53 -- dd/common.sh@98 -- # xtrace_disable 00:08:14.950 02:52:53 -- common/autotest_common.sh@10 -- # set +x 00:08:14.950 02:52:53 -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.950 02:52:53 -- dd/posix.sh@60 -- # atime_if=1713840773 00:08:14.950 02:52:53 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.950 02:52:53 -- dd/posix.sh@61 -- # atime_of=1713840773 00:08:14.950 02:52:53 -- dd/posix.sh@66 -- # sleep 1 00:08:15.887 02:52:54 -- dd/posix.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:15.887 [2024-04-23 02:52:54.928940] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:15.887 [2024-04-23 02:52:54.929040] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76917 ] 00:08:15.887 [2024-04-23 02:52:55.043003] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:16.145 [2024-04-23 02:52:55.059028] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.145 [2024-04-23 02:52:55.098381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.145  Copying: 512/512 [B] (average 500 kBps) 00:08:16.145 00:08:16.145 02:52:55 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.405 02:52:55 -- dd/posix.sh@69 -- # (( atime_if == 1713840773 )) 00:08:16.405 02:52:55 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.405 02:52:55 -- dd/posix.sh@70 -- # (( atime_of == 1713840773 )) 00:08:16.405 02:52:55 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.405 [2024-04-23 02:52:55.353451] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:16.405 [2024-04-23 02:52:55.353546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76923 ] 00:08:16.405 [2024-04-23 02:52:55.473731] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:16.405 [2024-04-23 02:52:55.494026] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.405 [2024-04-23 02:52:55.535041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.665  Copying: 512/512 [B] (average 500 kBps) 00:08:16.665 00:08:16.665 02:52:55 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.665 02:52:55 -- dd/posix.sh@73 -- # (( atime_if < 1713840775 )) 00:08:16.665 00:08:16.665 real 0m1.880s 00:08:16.665 user 0m0.440s 00:08:16.665 sys 0m0.199s 00:08:16.665 02:52:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:16.665 02:52:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.665 ************************************ 00:08:16.665 END TEST dd_flag_noatime_forced_aio 00:08:16.665 ************************************ 00:08:16.665 02:52:55 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:16.665 02:52:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:16.665 02:52:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:16.665 02:52:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.924 ************************************ 00:08:16.924 START TEST dd_flags_misc_forced_aio 00:08:16.924 ************************************ 00:08:16.924 02:52:55 -- common/autotest_common.sh@1111 -- # io 00:08:16.924 02:52:55 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:16.924 02:52:55 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:16.924 02:52:55 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:16.924 02:52:55 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:16.924 02:52:55 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:16.924 02:52:55 -- dd/common.sh@98 -- # xtrace_disable 00:08:16.924 02:52:55 -- common/autotest_common.sh@10 -- # set +x 00:08:16.924 02:52:55 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.924 02:52:55 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:16.924 [2024-04-23 02:52:55.911169] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:16.924 [2024-04-23 02:52:55.911265] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76959 ] 00:08:16.924 [2024-04-23 02:52:56.032487] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
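The noatime test that just ended works by sampling access times with stat: it records the atime of each dump file, copies with --iflag=noatime and asserts the atimes are unchanged, then copies without the flag and asserts the source atime has moved forward. A condensed sketch of that sequence as it appears in the trace (the sleep 1 guarantees a visible timestamp difference):

  atime_if=$(stat --printf=%X dd.dump0)
  atime_of=$(stat --printf=%X dd.dump1)
  sleep 1
  spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
  (( atime_if == $(stat --printf=%X dd.dump0) ))   # O_NOATIME: the read left atime alone
  (( atime_of == $(stat --printf=%X dd.dump1) ))
  spdk_dd --aio --if=dd.dump0 --of=dd.dump1
  (( atime_if < $(stat --printf=%X dd.dump0) ))    # a plain read advanced it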
00:08:16.924 [2024-04-23 02:52:56.050039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.183 [2024-04-23 02:52:56.086323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.183  Copying: 512/512 [B] (average 500 kBps) 00:08:17.183 00:08:17.183 02:52:56 -- dd/posix.sh@93 -- # [[ 0i246am8h2eet3vrtfdrbxgc6bh3rezogesvofn6xd4oihpysexpsrabc4qhrnl8g09s66l2k2wdfv0k36lq76ghrx7tqhp6qj3p5rv6tfzawbk14am372i0gc0yy4oa23n2jislc9e7j7dr8bq1xxyh81iobxy0ghvipza9bx72ca2ufl1b7pff2yu0iuki5k39ygg12vn7myx795xb3isait5n66xni2jky0to3qbqeg8wh9ods5wba2777f3lthr6hurmynruja3d9afwdkl4mkwji9uushvafaxtoat2bzzc131sscgcemqcb8q6trukt7l5ulyyi9sc28042zjlpdskmkooehnm8fr0ayu3h982btx20sespuo19w59zvl5apyjmu7p8laxlvbiz4x0qrsrnj31c1gu8ayv68j9xo3kywhtxge9ay05p5zovviicq48t42n3eajc02jqekiw3o6pm1vv0wcsgq910gt1mev4bt9qc06dyctkix7 == \0\i\2\4\6\a\m\8\h\2\e\e\t\3\v\r\t\f\d\r\b\x\g\c\6\b\h\3\r\e\z\o\g\e\s\v\o\f\n\6\x\d\4\o\i\h\p\y\s\e\x\p\s\r\a\b\c\4\q\h\r\n\l\8\g\0\9\s\6\6\l\2\k\2\w\d\f\v\0\k\3\6\l\q\7\6\g\h\r\x\7\t\q\h\p\6\q\j\3\p\5\r\v\6\t\f\z\a\w\b\k\1\4\a\m\3\7\2\i\0\g\c\0\y\y\4\o\a\2\3\n\2\j\i\s\l\c\9\e\7\j\7\d\r\8\b\q\1\x\x\y\h\8\1\i\o\b\x\y\0\g\h\v\i\p\z\a\9\b\x\7\2\c\a\2\u\f\l\1\b\7\p\f\f\2\y\u\0\i\u\k\i\5\k\3\9\y\g\g\1\2\v\n\7\m\y\x\7\9\5\x\b\3\i\s\a\i\t\5\n\6\6\x\n\i\2\j\k\y\0\t\o\3\q\b\q\e\g\8\w\h\9\o\d\s\5\w\b\a\2\7\7\7\f\3\l\t\h\r\6\h\u\r\m\y\n\r\u\j\a\3\d\9\a\f\w\d\k\l\4\m\k\w\j\i\9\u\u\s\h\v\a\f\a\x\t\o\a\t\2\b\z\z\c\1\3\1\s\s\c\g\c\e\m\q\c\b\8\q\6\t\r\u\k\t\7\l\5\u\l\y\y\i\9\s\c\2\8\0\4\2\z\j\l\p\d\s\k\m\k\o\o\e\h\n\m\8\f\r\0\a\y\u\3\h\9\8\2\b\t\x\2\0\s\e\s\p\u\o\1\9\w\5\9\z\v\l\5\a\p\y\j\m\u\7\p\8\l\a\x\l\v\b\i\z\4\x\0\q\r\s\r\n\j\3\1\c\1\g\u\8\a\y\v\6\8\j\9\x\o\3\k\y\w\h\t\x\g\e\9\a\y\0\5\p\5\z\o\v\v\i\i\c\q\4\8\t\4\2\n\3\e\a\j\c\0\2\j\q\e\k\i\w\3\o\6\p\m\1\v\v\0\w\c\s\g\q\9\1\0\g\t\1\m\e\v\4\b\t\9\q\c\0\6\d\y\c\t\k\i\x\7 ]] 00:08:17.183 02:52:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.183 02:52:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:17.183 [2024-04-23 02:52:56.304450] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:17.183 [2024-04-23 02:52:56.304565] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76961 ] 00:08:17.442 [2024-04-23 02:52:56.418957] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:17.442 [2024-04-23 02:52:56.428437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.442 [2024-04-23 02:52:56.458828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.702  Copying: 512/512 [B] (average 500 kBps) 00:08:17.702 00:08:17.702 02:52:56 -- dd/posix.sh@93 -- # [[ 0i246am8h2eet3vrtfdrbxgc6bh3rezogesvofn6xd4oihpysexpsrabc4qhrnl8g09s66l2k2wdfv0k36lq76ghrx7tqhp6qj3p5rv6tfzawbk14am372i0gc0yy4oa23n2jislc9e7j7dr8bq1xxyh81iobxy0ghvipza9bx72ca2ufl1b7pff2yu0iuki5k39ygg12vn7myx795xb3isait5n66xni2jky0to3qbqeg8wh9ods5wba2777f3lthr6hurmynruja3d9afwdkl4mkwji9uushvafaxtoat2bzzc131sscgcemqcb8q6trukt7l5ulyyi9sc28042zjlpdskmkooehnm8fr0ayu3h982btx20sespuo19w59zvl5apyjmu7p8laxlvbiz4x0qrsrnj31c1gu8ayv68j9xo3kywhtxge9ay05p5zovviicq48t42n3eajc02jqekiw3o6pm1vv0wcsgq910gt1mev4bt9qc06dyctkix7 == \0\i\2\4\6\a\m\8\h\2\e\e\t\3\v\r\t\f\d\r\b\x\g\c\6\b\h\3\r\e\z\o\g\e\s\v\o\f\n\6\x\d\4\o\i\h\p\y\s\e\x\p\s\r\a\b\c\4\q\h\r\n\l\8\g\0\9\s\6\6\l\2\k\2\w\d\f\v\0\k\3\6\l\q\7\6\g\h\r\x\7\t\q\h\p\6\q\j\3\p\5\r\v\6\t\f\z\a\w\b\k\1\4\a\m\3\7\2\i\0\g\c\0\y\y\4\o\a\2\3\n\2\j\i\s\l\c\9\e\7\j\7\d\r\8\b\q\1\x\x\y\h\8\1\i\o\b\x\y\0\g\h\v\i\p\z\a\9\b\x\7\2\c\a\2\u\f\l\1\b\7\p\f\f\2\y\u\0\i\u\k\i\5\k\3\9\y\g\g\1\2\v\n\7\m\y\x\7\9\5\x\b\3\i\s\a\i\t\5\n\6\6\x\n\i\2\j\k\y\0\t\o\3\q\b\q\e\g\8\w\h\9\o\d\s\5\w\b\a\2\7\7\7\f\3\l\t\h\r\6\h\u\r\m\y\n\r\u\j\a\3\d\9\a\f\w\d\k\l\4\m\k\w\j\i\9\u\u\s\h\v\a\f\a\x\t\o\a\t\2\b\z\z\c\1\3\1\s\s\c\g\c\e\m\q\c\b\8\q\6\t\r\u\k\t\7\l\5\u\l\y\y\i\9\s\c\2\8\0\4\2\z\j\l\p\d\s\k\m\k\o\o\e\h\n\m\8\f\r\0\a\y\u\3\h\9\8\2\b\t\x\2\0\s\e\s\p\u\o\1\9\w\5\9\z\v\l\5\a\p\y\j\m\u\7\p\8\l\a\x\l\v\b\i\z\4\x\0\q\r\s\r\n\j\3\1\c\1\g\u\8\a\y\v\6\8\j\9\x\o\3\k\y\w\h\t\x\g\e\9\a\y\0\5\p\5\z\o\v\v\i\i\c\q\4\8\t\4\2\n\3\e\a\j\c\0\2\j\q\e\k\i\w\3\o\6\p\m\1\v\v\0\w\c\s\g\q\9\1\0\g\t\1\m\e\v\4\b\t\9\q\c\0\6\d\y\c\t\k\i\x\7 ]] 00:08:17.702 02:52:56 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.702 02:52:56 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:17.702 [2024-04-23 02:52:56.700353] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:17.702 [2024-04-23 02:52:56.700447] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76974 ] 00:08:17.702 [2024-04-23 02:52:56.820253] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:17.702 [2024-04-23 02:52:56.834229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.962 [2024-04-23 02:52:56.864842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.962  Copying: 512/512 [B] (average 250 kBps) 00:08:17.962 00:08:17.962 02:52:57 -- dd/posix.sh@93 -- # [[ 0i246am8h2eet3vrtfdrbxgc6bh3rezogesvofn6xd4oihpysexpsrabc4qhrnl8g09s66l2k2wdfv0k36lq76ghrx7tqhp6qj3p5rv6tfzawbk14am372i0gc0yy4oa23n2jislc9e7j7dr8bq1xxyh81iobxy0ghvipza9bx72ca2ufl1b7pff2yu0iuki5k39ygg12vn7myx795xb3isait5n66xni2jky0to3qbqeg8wh9ods5wba2777f3lthr6hurmynruja3d9afwdkl4mkwji9uushvafaxtoat2bzzc131sscgcemqcb8q6trukt7l5ulyyi9sc28042zjlpdskmkooehnm8fr0ayu3h982btx20sespuo19w59zvl5apyjmu7p8laxlvbiz4x0qrsrnj31c1gu8ayv68j9xo3kywhtxge9ay05p5zovviicq48t42n3eajc02jqekiw3o6pm1vv0wcsgq910gt1mev4bt9qc06dyctkix7 == \0\i\2\4\6\a\m\8\h\2\e\e\t\3\v\r\t\f\d\r\b\x\g\c\6\b\h\3\r\e\z\o\g\e\s\v\o\f\n\6\x\d\4\o\i\h\p\y\s\e\x\p\s\r\a\b\c\4\q\h\r\n\l\8\g\0\9\s\6\6\l\2\k\2\w\d\f\v\0\k\3\6\l\q\7\6\g\h\r\x\7\t\q\h\p\6\q\j\3\p\5\r\v\6\t\f\z\a\w\b\k\1\4\a\m\3\7\2\i\0\g\c\0\y\y\4\o\a\2\3\n\2\j\i\s\l\c\9\e\7\j\7\d\r\8\b\q\1\x\x\y\h\8\1\i\o\b\x\y\0\g\h\v\i\p\z\a\9\b\x\7\2\c\a\2\u\f\l\1\b\7\p\f\f\2\y\u\0\i\u\k\i\5\k\3\9\y\g\g\1\2\v\n\7\m\y\x\7\9\5\x\b\3\i\s\a\i\t\5\n\6\6\x\n\i\2\j\k\y\0\t\o\3\q\b\q\e\g\8\w\h\9\o\d\s\5\w\b\a\2\7\7\7\f\3\l\t\h\r\6\h\u\r\m\y\n\r\u\j\a\3\d\9\a\f\w\d\k\l\4\m\k\w\j\i\9\u\u\s\h\v\a\f\a\x\t\o\a\t\2\b\z\z\c\1\3\1\s\s\c\g\c\e\m\q\c\b\8\q\6\t\r\u\k\t\7\l\5\u\l\y\y\i\9\s\c\2\8\0\4\2\z\j\l\p\d\s\k\m\k\o\o\e\h\n\m\8\f\r\0\a\y\u\3\h\9\8\2\b\t\x\2\0\s\e\s\p\u\o\1\9\w\5\9\z\v\l\5\a\p\y\j\m\u\7\p\8\l\a\x\l\v\b\i\z\4\x\0\q\r\s\r\n\j\3\1\c\1\g\u\8\a\y\v\6\8\j\9\x\o\3\k\y\w\h\t\x\g\e\9\a\y\0\5\p\5\z\o\v\v\i\i\c\q\4\8\t\4\2\n\3\e\a\j\c\0\2\j\q\e\k\i\w\3\o\6\p\m\1\v\v\0\w\c\s\g\q\9\1\0\g\t\1\m\e\v\4\b\t\9\q\c\0\6\d\y\c\t\k\i\x\7 ]] 00:08:17.962 02:52:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:17.962 02:52:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:17.962 [2024-04-23 02:52:57.081857] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:17.962 [2024-04-23 02:52:57.081977] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76976 ] 00:08:18.221 [2024-04-23 02:52:57.203924] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:18.221 [2024-04-23 02:52:57.223053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.221 [2024-04-23 02:52:57.260860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.481  Copying: 512/512 [B] (average 500 kBps) 00:08:18.481 00:08:18.481 02:52:57 -- dd/posix.sh@93 -- # [[ 0i246am8h2eet3vrtfdrbxgc6bh3rezogesvofn6xd4oihpysexpsrabc4qhrnl8g09s66l2k2wdfv0k36lq76ghrx7tqhp6qj3p5rv6tfzawbk14am372i0gc0yy4oa23n2jislc9e7j7dr8bq1xxyh81iobxy0ghvipza9bx72ca2ufl1b7pff2yu0iuki5k39ygg12vn7myx795xb3isait5n66xni2jky0to3qbqeg8wh9ods5wba2777f3lthr6hurmynruja3d9afwdkl4mkwji9uushvafaxtoat2bzzc131sscgcemqcb8q6trukt7l5ulyyi9sc28042zjlpdskmkooehnm8fr0ayu3h982btx20sespuo19w59zvl5apyjmu7p8laxlvbiz4x0qrsrnj31c1gu8ayv68j9xo3kywhtxge9ay05p5zovviicq48t42n3eajc02jqekiw3o6pm1vv0wcsgq910gt1mev4bt9qc06dyctkix7 == \0\i\2\4\6\a\m\8\h\2\e\e\t\3\v\r\t\f\d\r\b\x\g\c\6\b\h\3\r\e\z\o\g\e\s\v\o\f\n\6\x\d\4\o\i\h\p\y\s\e\x\p\s\r\a\b\c\4\q\h\r\n\l\8\g\0\9\s\6\6\l\2\k\2\w\d\f\v\0\k\3\6\l\q\7\6\g\h\r\x\7\t\q\h\p\6\q\j\3\p\5\r\v\6\t\f\z\a\w\b\k\1\4\a\m\3\7\2\i\0\g\c\0\y\y\4\o\a\2\3\n\2\j\i\s\l\c\9\e\7\j\7\d\r\8\b\q\1\x\x\y\h\8\1\i\o\b\x\y\0\g\h\v\i\p\z\a\9\b\x\7\2\c\a\2\u\f\l\1\b\7\p\f\f\2\y\u\0\i\u\k\i\5\k\3\9\y\g\g\1\2\v\n\7\m\y\x\7\9\5\x\b\3\i\s\a\i\t\5\n\6\6\x\n\i\2\j\k\y\0\t\o\3\q\b\q\e\g\8\w\h\9\o\d\s\5\w\b\a\2\7\7\7\f\3\l\t\h\r\6\h\u\r\m\y\n\r\u\j\a\3\d\9\a\f\w\d\k\l\4\m\k\w\j\i\9\u\u\s\h\v\a\f\a\x\t\o\a\t\2\b\z\z\c\1\3\1\s\s\c\g\c\e\m\q\c\b\8\q\6\t\r\u\k\t\7\l\5\u\l\y\y\i\9\s\c\2\8\0\4\2\z\j\l\p\d\s\k\m\k\o\o\e\h\n\m\8\f\r\0\a\y\u\3\h\9\8\2\b\t\x\2\0\s\e\s\p\u\o\1\9\w\5\9\z\v\l\5\a\p\y\j\m\u\7\p\8\l\a\x\l\v\b\i\z\4\x\0\q\r\s\r\n\j\3\1\c\1\g\u\8\a\y\v\6\8\j\9\x\o\3\k\y\w\h\t\x\g\e\9\a\y\0\5\p\5\z\o\v\v\i\i\c\q\4\8\t\4\2\n\3\e\a\j\c\0\2\j\q\e\k\i\w\3\o\6\p\m\1\v\v\0\w\c\s\g\q\9\1\0\g\t\1\m\e\v\4\b\t\9\q\c\0\6\d\y\c\t\k\i\x\7 ]] 00:08:18.481 02:52:57 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:18.481 02:52:57 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:18.481 02:52:57 -- dd/common.sh@98 -- # xtrace_disable 00:08:18.481 02:52:57 -- common/autotest_common.sh@10 -- # set +x 00:08:18.481 02:52:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.481 02:52:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:18.481 [2024-04-23 02:52:57.497950] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:18.481 [2024-04-23 02:52:57.498055] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76978 ] 00:08:18.481 [2024-04-23 02:52:57.619284] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:18.481 [2024-04-23 02:52:57.637974] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.740 [2024-04-23 02:52:57.668937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.740  Copying: 512/512 [B] (average 500 kBps) 00:08:18.740 00:08:18.740 02:52:57 -- dd/posix.sh@93 -- # [[ cvvoqgryseky4pfz45ymcmm0tyqiqlatreeru8i1612e5svi8iy335b0k0zvjlrk5ocrud2s1z9ao1kf12r9f60r7fzwntccvt9ztauk3bwwlpukr0ecnp60h5cc46rwq73i0uvy24m8e64be2blw308vpwhbzsaxoh0p31gaazo9i224phqpfxytjvoosci34o2nwcaez2x1egig7d0t1b1cq5hsstu83wh2dnlr8hzimfzcy4eua05413ylhsxxmdwqz0l4k148n4ome0i0o6k53j0kxid7gbrdjxzot7utomzrot574qb7fvt9nukja2s1htw5yzdmblo78wupkm4hgwp70aqqqdsosmfemlo4xdj7yzas88u6e4v057ndxau04fbsiqtwklxrwsebvu38gj03zm54v1puwtzjg0gl84rnng1cfd0cucwqg505rah1yvek7p29iuseyv5p6x59gongqnqrooaxgoprctst3jzjvfmsrtw29ui2w3i == \c\v\v\o\q\g\r\y\s\e\k\y\4\p\f\z\4\5\y\m\c\m\m\0\t\y\q\i\q\l\a\t\r\e\e\r\u\8\i\1\6\1\2\e\5\s\v\i\8\i\y\3\3\5\b\0\k\0\z\v\j\l\r\k\5\o\c\r\u\d\2\s\1\z\9\a\o\1\k\f\1\2\r\9\f\6\0\r\7\f\z\w\n\t\c\c\v\t\9\z\t\a\u\k\3\b\w\w\l\p\u\k\r\0\e\c\n\p\6\0\h\5\c\c\4\6\r\w\q\7\3\i\0\u\v\y\2\4\m\8\e\6\4\b\e\2\b\l\w\3\0\8\v\p\w\h\b\z\s\a\x\o\h\0\p\3\1\g\a\a\z\o\9\i\2\2\4\p\h\q\p\f\x\y\t\j\v\o\o\s\c\i\3\4\o\2\n\w\c\a\e\z\2\x\1\e\g\i\g\7\d\0\t\1\b\1\c\q\5\h\s\s\t\u\8\3\w\h\2\d\n\l\r\8\h\z\i\m\f\z\c\y\4\e\u\a\0\5\4\1\3\y\l\h\s\x\x\m\d\w\q\z\0\l\4\k\1\4\8\n\4\o\m\e\0\i\0\o\6\k\5\3\j\0\k\x\i\d\7\g\b\r\d\j\x\z\o\t\7\u\t\o\m\z\r\o\t\5\7\4\q\b\7\f\v\t\9\n\u\k\j\a\2\s\1\h\t\w\5\y\z\d\m\b\l\o\7\8\w\u\p\k\m\4\h\g\w\p\7\0\a\q\q\q\d\s\o\s\m\f\e\m\l\o\4\x\d\j\7\y\z\a\s\8\8\u\6\e\4\v\0\5\7\n\d\x\a\u\0\4\f\b\s\i\q\t\w\k\l\x\r\w\s\e\b\v\u\3\8\g\j\0\3\z\m\5\4\v\1\p\u\w\t\z\j\g\0\g\l\8\4\r\n\n\g\1\c\f\d\0\c\u\c\w\q\g\5\0\5\r\a\h\1\y\v\e\k\7\p\2\9\i\u\s\e\y\v\5\p\6\x\5\9\g\o\n\g\q\n\q\r\o\o\a\x\g\o\p\r\c\t\s\t\3\j\z\j\v\f\m\s\r\t\w\2\9\u\i\2\w\3\i ]] 00:08:18.740 02:52:57 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.740 02:52:57 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:18.740 [2024-04-23 02:52:57.890728] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:18.740 [2024-04-23 02:52:57.890839] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76991 ] 00:08:19.000 [2024-04-23 02:52:58.011294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:19.000 [2024-04-23 02:52:58.028238] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.000 [2024-04-23 02:52:58.058101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.258  Copying: 512/512 [B] (average 500 kBps) 00:08:19.259 00:08:19.259 02:52:58 -- dd/posix.sh@93 -- # [[ cvvoqgryseky4pfz45ymcmm0tyqiqlatreeru8i1612e5svi8iy335b0k0zvjlrk5ocrud2s1z9ao1kf12r9f60r7fzwntccvt9ztauk3bwwlpukr0ecnp60h5cc46rwq73i0uvy24m8e64be2blw308vpwhbzsaxoh0p31gaazo9i224phqpfxytjvoosci34o2nwcaez2x1egig7d0t1b1cq5hsstu83wh2dnlr8hzimfzcy4eua05413ylhsxxmdwqz0l4k148n4ome0i0o6k53j0kxid7gbrdjxzot7utomzrot574qb7fvt9nukja2s1htw5yzdmblo78wupkm4hgwp70aqqqdsosmfemlo4xdj7yzas88u6e4v057ndxau04fbsiqtwklxrwsebvu38gj03zm54v1puwtzjg0gl84rnng1cfd0cucwqg505rah1yvek7p29iuseyv5p6x59gongqnqrooaxgoprctst3jzjvfmsrtw29ui2w3i == \c\v\v\o\q\g\r\y\s\e\k\y\4\p\f\z\4\5\y\m\c\m\m\0\t\y\q\i\q\l\a\t\r\e\e\r\u\8\i\1\6\1\2\e\5\s\v\i\8\i\y\3\3\5\b\0\k\0\z\v\j\l\r\k\5\o\c\r\u\d\2\s\1\z\9\a\o\1\k\f\1\2\r\9\f\6\0\r\7\f\z\w\n\t\c\c\v\t\9\z\t\a\u\k\3\b\w\w\l\p\u\k\r\0\e\c\n\p\6\0\h\5\c\c\4\6\r\w\q\7\3\i\0\u\v\y\2\4\m\8\e\6\4\b\e\2\b\l\w\3\0\8\v\p\w\h\b\z\s\a\x\o\h\0\p\3\1\g\a\a\z\o\9\i\2\2\4\p\h\q\p\f\x\y\t\j\v\o\o\s\c\i\3\4\o\2\n\w\c\a\e\z\2\x\1\e\g\i\g\7\d\0\t\1\b\1\c\q\5\h\s\s\t\u\8\3\w\h\2\d\n\l\r\8\h\z\i\m\f\z\c\y\4\e\u\a\0\5\4\1\3\y\l\h\s\x\x\m\d\w\q\z\0\l\4\k\1\4\8\n\4\o\m\e\0\i\0\o\6\k\5\3\j\0\k\x\i\d\7\g\b\r\d\j\x\z\o\t\7\u\t\o\m\z\r\o\t\5\7\4\q\b\7\f\v\t\9\n\u\k\j\a\2\s\1\h\t\w\5\y\z\d\m\b\l\o\7\8\w\u\p\k\m\4\h\g\w\p\7\0\a\q\q\q\d\s\o\s\m\f\e\m\l\o\4\x\d\j\7\y\z\a\s\8\8\u\6\e\4\v\0\5\7\n\d\x\a\u\0\4\f\b\s\i\q\t\w\k\l\x\r\w\s\e\b\v\u\3\8\g\j\0\3\z\m\5\4\v\1\p\u\w\t\z\j\g\0\g\l\8\4\r\n\n\g\1\c\f\d\0\c\u\c\w\q\g\5\0\5\r\a\h\1\y\v\e\k\7\p\2\9\i\u\s\e\y\v\5\p\6\x\5\9\g\o\n\g\q\n\q\r\o\o\a\x\g\o\p\r\c\t\s\t\3\j\z\j\v\f\m\s\r\t\w\2\9\u\i\2\w\3\i ]] 00:08:19.259 02:52:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.259 02:52:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:19.259 [2024-04-23 02:52:58.275152] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:19.259 [2024-04-23 02:52:58.275251] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76993 ] 00:08:19.259 [2024-04-23 02:52:58.395388] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:19.259 [2024-04-23 02:52:58.413459] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.518 [2024-04-23 02:52:58.444371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.518  Copying: 512/512 [B] (average 125 kBps) 00:08:19.518 00:08:19.518 02:52:58 -- dd/posix.sh@93 -- # [[ cvvoqgryseky4pfz45ymcmm0tyqiqlatreeru8i1612e5svi8iy335b0k0zvjlrk5ocrud2s1z9ao1kf12r9f60r7fzwntccvt9ztauk3bwwlpukr0ecnp60h5cc46rwq73i0uvy24m8e64be2blw308vpwhbzsaxoh0p31gaazo9i224phqpfxytjvoosci34o2nwcaez2x1egig7d0t1b1cq5hsstu83wh2dnlr8hzimfzcy4eua05413ylhsxxmdwqz0l4k148n4ome0i0o6k53j0kxid7gbrdjxzot7utomzrot574qb7fvt9nukja2s1htw5yzdmblo78wupkm4hgwp70aqqqdsosmfemlo4xdj7yzas88u6e4v057ndxau04fbsiqtwklxrwsebvu38gj03zm54v1puwtzjg0gl84rnng1cfd0cucwqg505rah1yvek7p29iuseyv5p6x59gongqnqrooaxgoprctst3jzjvfmsrtw29ui2w3i == \c\v\v\o\q\g\r\y\s\e\k\y\4\p\f\z\4\5\y\m\c\m\m\0\t\y\q\i\q\l\a\t\r\e\e\r\u\8\i\1\6\1\2\e\5\s\v\i\8\i\y\3\3\5\b\0\k\0\z\v\j\l\r\k\5\o\c\r\u\d\2\s\1\z\9\a\o\1\k\f\1\2\r\9\f\6\0\r\7\f\z\w\n\t\c\c\v\t\9\z\t\a\u\k\3\b\w\w\l\p\u\k\r\0\e\c\n\p\6\0\h\5\c\c\4\6\r\w\q\7\3\i\0\u\v\y\2\4\m\8\e\6\4\b\e\2\b\l\w\3\0\8\v\p\w\h\b\z\s\a\x\o\h\0\p\3\1\g\a\a\z\o\9\i\2\2\4\p\h\q\p\f\x\y\t\j\v\o\o\s\c\i\3\4\o\2\n\w\c\a\e\z\2\x\1\e\g\i\g\7\d\0\t\1\b\1\c\q\5\h\s\s\t\u\8\3\w\h\2\d\n\l\r\8\h\z\i\m\f\z\c\y\4\e\u\a\0\5\4\1\3\y\l\h\s\x\x\m\d\w\q\z\0\l\4\k\1\4\8\n\4\o\m\e\0\i\0\o\6\k\5\3\j\0\k\x\i\d\7\g\b\r\d\j\x\z\o\t\7\u\t\o\m\z\r\o\t\5\7\4\q\b\7\f\v\t\9\n\u\k\j\a\2\s\1\h\t\w\5\y\z\d\m\b\l\o\7\8\w\u\p\k\m\4\h\g\w\p\7\0\a\q\q\q\d\s\o\s\m\f\e\m\l\o\4\x\d\j\7\y\z\a\s\8\8\u\6\e\4\v\0\5\7\n\d\x\a\u\0\4\f\b\s\i\q\t\w\k\l\x\r\w\s\e\b\v\u\3\8\g\j\0\3\z\m\5\4\v\1\p\u\w\t\z\j\g\0\g\l\8\4\r\n\n\g\1\c\f\d\0\c\u\c\w\q\g\5\0\5\r\a\h\1\y\v\e\k\7\p\2\9\i\u\s\e\y\v\5\p\6\x\5\9\g\o\n\g\q\n\q\r\o\o\a\x\g\o\p\r\c\t\s\t\3\j\z\j\v\f\m\s\r\t\w\2\9\u\i\2\w\3\i ]] 00:08:19.518 02:52:58 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.518 02:52:58 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:19.778 [2024-04-23 02:52:58.686788] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:19.778 [2024-04-23 02:52:58.686909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77005 ] 00:08:19.778 [2024-04-23 02:52:58.808201] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:19.778 [2024-04-23 02:52:58.828091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.778 [2024-04-23 02:52:58.864164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.037  Copying: 512/512 [B] (average 500 kBps) 00:08:20.037 00:08:20.037 02:52:59 -- dd/posix.sh@93 -- # [[ cvvoqgryseky4pfz45ymcmm0tyqiqlatreeru8i1612e5svi8iy335b0k0zvjlrk5ocrud2s1z9ao1kf12r9f60r7fzwntccvt9ztauk3bwwlpukr0ecnp60h5cc46rwq73i0uvy24m8e64be2blw308vpwhbzsaxoh0p31gaazo9i224phqpfxytjvoosci34o2nwcaez2x1egig7d0t1b1cq5hsstu83wh2dnlr8hzimfzcy4eua05413ylhsxxmdwqz0l4k148n4ome0i0o6k53j0kxid7gbrdjxzot7utomzrot574qb7fvt9nukja2s1htw5yzdmblo78wupkm4hgwp70aqqqdsosmfemlo4xdj7yzas88u6e4v057ndxau04fbsiqtwklxrwsebvu38gj03zm54v1puwtzjg0gl84rnng1cfd0cucwqg505rah1yvek7p29iuseyv5p6x59gongqnqrooaxgoprctst3jzjvfmsrtw29ui2w3i == \c\v\v\o\q\g\r\y\s\e\k\y\4\p\f\z\4\5\y\m\c\m\m\0\t\y\q\i\q\l\a\t\r\e\e\r\u\8\i\1\6\1\2\e\5\s\v\i\8\i\y\3\3\5\b\0\k\0\z\v\j\l\r\k\5\o\c\r\u\d\2\s\1\z\9\a\o\1\k\f\1\2\r\9\f\6\0\r\7\f\z\w\n\t\c\c\v\t\9\z\t\a\u\k\3\b\w\w\l\p\u\k\r\0\e\c\n\p\6\0\h\5\c\c\4\6\r\w\q\7\3\i\0\u\v\y\2\4\m\8\e\6\4\b\e\2\b\l\w\3\0\8\v\p\w\h\b\z\s\a\x\o\h\0\p\3\1\g\a\a\z\o\9\i\2\2\4\p\h\q\p\f\x\y\t\j\v\o\o\s\c\i\3\4\o\2\n\w\c\a\e\z\2\x\1\e\g\i\g\7\d\0\t\1\b\1\c\q\5\h\s\s\t\u\8\3\w\h\2\d\n\l\r\8\h\z\i\m\f\z\c\y\4\e\u\a\0\5\4\1\3\y\l\h\s\x\x\m\d\w\q\z\0\l\4\k\1\4\8\n\4\o\m\e\0\i\0\o\6\k\5\3\j\0\k\x\i\d\7\g\b\r\d\j\x\z\o\t\7\u\t\o\m\z\r\o\t\5\7\4\q\b\7\f\v\t\9\n\u\k\j\a\2\s\1\h\t\w\5\y\z\d\m\b\l\o\7\8\w\u\p\k\m\4\h\g\w\p\7\0\a\q\q\q\d\s\o\s\m\f\e\m\l\o\4\x\d\j\7\y\z\a\s\8\8\u\6\e\4\v\0\5\7\n\d\x\a\u\0\4\f\b\s\i\q\t\w\k\l\x\r\w\s\e\b\v\u\3\8\g\j\0\3\z\m\5\4\v\1\p\u\w\t\z\j\g\0\g\l\8\4\r\n\n\g\1\c\f\d\0\c\u\c\w\q\g\5\0\5\r\a\h\1\y\v\e\k\7\p\2\9\i\u\s\e\y\v\5\p\6\x\5\9\g\o\n\g\q\n\q\r\o\o\a\x\g\o\p\r\c\t\s\t\3\j\z\j\v\f\m\s\r\t\w\2\9\u\i\2\w\3\i ]] 00:08:20.037 00:08:20.037 real 0m3.184s 00:08:20.037 user 0m1.524s 00:08:20.037 sys 0m0.686s 00:08:20.037 02:52:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:20.037 ************************************ 00:08:20.037 END TEST dd_flags_misc_forced_aio 00:08:20.037 ************************************ 00:08:20.037 02:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.037 02:52:59 -- dd/posix.sh@1 -- # cleanup 00:08:20.037 02:52:59 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:20.037 02:52:59 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:20.037 ************************************ 00:08:20.037 END TEST spdk_dd_posix 00:08:20.037 ************************************ 00:08:20.037 00:08:20.037 real 0m15.575s 00:08:20.037 user 0m6.377s 00:08:20.037 sys 0m4.322s 00:08:20.037 02:52:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:20.037 02:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.037 02:52:59 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:20.037 02:52:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.037 02:52:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.037 02:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.297 ************************************ 00:08:20.297 START TEST spdk_dd_malloc 00:08:20.297 ************************************ 00:08:20.297 02:52:59 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 
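The dd_flags_misc_forced_aio pass that closes above repeats the same 512-byte copy once per output flag. A minimal sketch of that loop, assuming flags_rw carries the four oflags seen in the trace (direct, nonblock, sync, dsync); the variable and loop shape are illustrative rather than lifted from posix.sh:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
flags_rw=(direct nonblock sync dsync)
for flag_rw in "${flags_rw[@]}"; do
    # push the 512-byte dump through the AIO path with the flag under test;
    # the trace above shows one "Copying: 512/512 [B]" line per iteration
    "$DD" --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock \
          --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag="$flag_rw"
done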
00:08:20.297 * Looking for test storage... 00:08:20.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:20.297 02:52:59 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:20.297 02:52:59 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.297 02:52:59 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.297 02:52:59 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.297 02:52:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.297 02:52:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.297 02:52:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.297 02:52:59 -- paths/export.sh@5 -- # export PATH 00:08:20.297 02:52:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:20.297 02:52:59 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:20.297 02:52:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:20.297 02:52:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.297 02:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.297 ************************************ 00:08:20.297 START TEST dd_malloc_copy 00:08:20.297 ************************************ 00:08:20.297 02:52:59 -- common/autotest_common.sh@1111 -- # malloc_copy 00:08:20.297 02:52:59 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:20.297 02:52:59 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 
mbdev1_bs=512 00:08:20.297 02:52:59 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:20.297 02:52:59 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:20.297 02:52:59 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:20.297 02:52:59 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:20.297 02:52:59 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:20.297 02:52:59 -- dd/malloc.sh@28 -- # gen_conf 00:08:20.297 02:52:59 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.297 02:52:59 -- common/autotest_common.sh@10 -- # set +x 00:08:20.297 [2024-04-23 02:52:59.405593] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:20.297 [2024-04-23 02:52:59.405946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77079 ] 00:08:20.297 { 00:08:20.297 "subsystems": [ 00:08:20.297 { 00:08:20.297 "subsystem": "bdev", 00:08:20.297 "config": [ 00:08:20.297 { 00:08:20.297 "params": { 00:08:20.297 "block_size": 512, 00:08:20.297 "num_blocks": 1048576, 00:08:20.297 "name": "malloc0" 00:08:20.297 }, 00:08:20.297 "method": "bdev_malloc_create" 00:08:20.297 }, 00:08:20.297 { 00:08:20.297 "params": { 00:08:20.298 "block_size": 512, 00:08:20.298 "num_blocks": 1048576, 00:08:20.298 "name": "malloc1" 00:08:20.298 }, 00:08:20.298 "method": "bdev_malloc_create" 00:08:20.298 }, 00:08:20.298 { 00:08:20.298 "method": "bdev_wait_for_examine" 00:08:20.298 } 00:08:20.298 ] 00:08:20.298 } 00:08:20.298 ] 00:08:20.298 } 00:08:20.556 [2024-04-23 02:52:59.520551] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:20.556 [2024-04-23 02:52:59.537298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.556 [2024-04-23 02:52:59.567725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.127  Copying: 240/512 [MB] (240 MBps) Copying: 486/512 [MB] (246 MBps) Copying: 512/512 [MB] (average 241 MBps) 00:08:23.127 00:08:23.127 02:53:02 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:23.127 02:53:02 -- dd/malloc.sh@33 -- # gen_conf 00:08:23.127 02:53:02 -- dd/common.sh@31 -- # xtrace_disable 00:08:23.127 02:53:02 -- common/autotest_common.sh@10 -- # set +x 00:08:23.127 [2024-04-23 02:53:02.260276] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:08:23.127 [2024-04-23 02:53:02.260371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77121 ] 00:08:23.386 { 00:08:23.386 "subsystems": [ 00:08:23.386 { 00:08:23.386 "subsystem": "bdev", 00:08:23.386 "config": [ 00:08:23.386 { 00:08:23.386 "params": { 00:08:23.386 "block_size": 512, 00:08:23.386 "num_blocks": 1048576, 00:08:23.386 "name": "malloc0" 00:08:23.386 }, 00:08:23.386 "method": "bdev_malloc_create" 00:08:23.386 }, 00:08:23.386 { 00:08:23.386 "params": { 00:08:23.386 "block_size": 512, 00:08:23.386 "num_blocks": 1048576, 00:08:23.386 "name": "malloc1" 00:08:23.386 }, 00:08:23.386 "method": "bdev_malloc_create" 00:08:23.386 }, 00:08:23.386 { 00:08:23.386 "method": "bdev_wait_for_examine" 00:08:23.386 } 00:08:23.386 ] 00:08:23.386 } 00:08:23.386 ] 00:08:23.386 } 00:08:23.386 [2024-04-23 02:53:02.374428] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:23.386 [2024-04-23 02:53:02.388397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.386 [2024-04-23 02:53:02.421350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.266  Copying: 241/512 [MB] (241 MBps) Copying: 467/512 [MB] (226 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:08:26.266 00:08:26.266 00:08:26.266 real 0m5.791s 00:08:26.266 user 0m5.190s 00:08:26.266 sys 0m0.446s 00:08:26.266 02:53:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:26.266 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.266 ************************************ 00:08:26.266 END TEST dd_malloc_copy 00:08:26.266 ************************************ 00:08:26.266 00:08:26.266 real 0m6.005s 00:08:26.266 user 0m5.264s 00:08:26.266 sys 0m0.573s 00:08:26.266 02:53:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:26.266 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.266 ************************************ 00:08:26.266 END TEST spdk_dd_malloc 00:08:26.266 ************************************ 00:08:26.266 02:53:05 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:26.266 02:53:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:26.266 02:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.266 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.266 ************************************ 00:08:26.266 START TEST spdk_dd_bdev_to_bdev 00:08:26.266 ************************************ 00:08:26.267 02:53:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:26.267 * Looking for test storage... 
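The dd_malloc_copy run that finishes above configures spdk_dd entirely through a JSON document passed on --json /dev/fd/62; the gen_conf dump in the trace prints that document verbatim. A minimal standalone sketch of the same invocation, with the JSON inlined as a here-doc on stdin instead of fd 62 (that substitution, and nothing else, is mine; bdev names, block size, and block count are the values printed above):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/stdin <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc0", "block_size": 512, "num_blocks": 1048576 } },
        { "method": "bdev_malloc_create",
          "params": { "name": "malloc1", "block_size": 512, "num_blocks": 1048576 } },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF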
00:08:26.267 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:26.267 02:53:05 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.267 02:53:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.267 02:53:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.267 02:53:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.267 02:53:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.267 02:53:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.267 02:53:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.267 02:53:05 -- paths/export.sh@5 -- # export PATH 00:08:26.267 02:53:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:11.0 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:26.267 02:53:05 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:26.267 02:53:05 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:26.267 02:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.267 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.527 ************************************ 00:08:26.527 START TEST dd_inflate_file 00:08:26.527 ************************************ 00:08:26.527 02:53:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:26.527 [2024-04-23 02:53:05.521925] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:26.527 [2024-04-23 02:53:05.522021] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77231 ] 00:08:26.527 [2024-04-23 02:53:05.643545] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:26.527 [2024-04-23 02:53:05.664357] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.786 [2024-04-23 02:53:05.702795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.786  Copying: 64/64 [MB] (average 1641 MBps) 00:08:26.786 00:08:26.786 00:08:26.786 real 0m0.433s 00:08:26.786 user 0m0.242s 00:08:26.786 sys 0m0.204s 00:08:26.786 02:53:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:26.786 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:26.786 ************************************ 00:08:26.786 END TEST dd_inflate_file 00:08:26.786 ************************************ 00:08:27.046 02:53:05 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:27.046 02:53:05 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:27.046 02:53:05 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:27.046 02:53:05 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:27.046 02:53:05 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:27.046 02:53:05 -- dd/common.sh@31 -- # xtrace_disable 00:08:27.046 02:53:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.046 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.046 02:53:05 -- common/autotest_common.sh@10 -- # set +x 00:08:27.046 { 00:08:27.046 "subsystems": [ 00:08:27.046 { 00:08:27.046 "subsystem": "bdev", 00:08:27.046 "config": [ 00:08:27.046 { 00:08:27.046 "params": { 00:08:27.046 "trtype": "pcie", 00:08:27.046 "traddr": "0000:00:10.0", 00:08:27.046 "name": "Nvme0" 00:08:27.046 }, 00:08:27.046 "method": "bdev_nvme_attach_controller" 00:08:27.046 }, 00:08:27.046 { 00:08:27.046 "params": { 00:08:27.046 "trtype": "pcie", 00:08:27.046 "traddr": "0000:00:11.0", 00:08:27.046 "name": "Nvme1" 00:08:27.046 }, 00:08:27.046 "method": "bdev_nvme_attach_controller" 00:08:27.046 }, 00:08:27.046 { 00:08:27.046 "method": "bdev_wait_for_examine" 00:08:27.046 } 00:08:27.046 ] 00:08:27.046 } 00:08:27.046 ] 00:08:27.046 } 00:08:27.046 ************************************ 00:08:27.046 START TEST dd_copy_to_out_bdev 00:08:27.046 ************************************ 00:08:27.046 02:53:06 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:27.046 [2024-04-23 02:53:06.070316] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:27.046 [2024-04-23 02:53:06.070399] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77275 ] 00:08:27.046 [2024-04-23 02:53:06.190521] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:27.305 [2024-04-23 02:53:06.210197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.305 [2024-04-23 02:53:06.241406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.683  Copying: 53/64 [MB] (53 MBps) Copying: 64/64 [MB] (average 53 MBps) 00:08:28.683 00:08:28.683 00:08:28.683 real 0m1.747s 00:08:28.683 user 0m1.520s 00:08:28.683 sys 0m1.418s 00:08:28.683 02:53:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:28.683 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:08:28.683 ************************************ 00:08:28.683 END TEST dd_copy_to_out_bdev 00:08:28.683 ************************************ 00:08:28.683 02:53:07 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:28.683 02:53:07 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:28.683 02:53:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:28.683 02:53:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.683 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:08:28.943 ************************************ 00:08:28.943 START TEST dd_offset_magic 00:08:28.943 ************************************ 00:08:28.943 02:53:07 -- common/autotest_common.sh@1111 -- # offset_magic 00:08:28.943 02:53:07 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:28.943 02:53:07 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:28.943 02:53:07 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:28.943 02:53:07 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:28.943 02:53:07 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:28.943 02:53:07 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:28.943 02:53:07 -- dd/common.sh@31 -- # xtrace_disable 00:08:28.943 02:53:07 -- common/autotest_common.sh@10 -- # set +x 00:08:28.943 [2024-04-23 02:53:07.949557] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:28.943 [2024-04-23 02:53:07.949674] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77318 ] 00:08:28.943 { 00:08:28.943 "subsystems": [ 00:08:28.943 { 00:08:28.943 "subsystem": "bdev", 00:08:28.943 "config": [ 00:08:28.943 { 00:08:28.943 "params": { 00:08:28.943 "trtype": "pcie", 00:08:28.943 "traddr": "0000:00:10.0", 00:08:28.943 "name": "Nvme0" 00:08:28.943 }, 00:08:28.943 "method": "bdev_nvme_attach_controller" 00:08:28.943 }, 00:08:28.943 { 00:08:28.943 "params": { 00:08:28.943 "trtype": "pcie", 00:08:28.943 "traddr": "0000:00:11.0", 00:08:28.943 "name": "Nvme1" 00:08:28.943 }, 00:08:28.943 "method": "bdev_nvme_attach_controller" 00:08:28.943 }, 00:08:28.943 { 00:08:28.943 "method": "bdev_wait_for_examine" 00:08:28.943 } 00:08:28.943 ] 00:08:28.943 } 00:08:28.943 ] 00:08:28.943 } 00:08:28.943 [2024-04-23 02:53:08.070060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:28.943 [2024-04-23 02:53:08.089254] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.202 [2024-04-23 02:53:08.122073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.461  Copying: 65/65 [MB] (average 902 MBps) 00:08:29.461 00:08:29.461 02:53:08 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:29.461 02:53:08 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:29.461 02:53:08 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.461 02:53:08 -- common/autotest_common.sh@10 -- # set +x 00:08:29.461 [2024-04-23 02:53:08.560354] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:29.461 [2024-04-23 02:53:08.560433] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77332 ] 00:08:29.461 { 00:08:29.461 "subsystems": [ 00:08:29.461 { 00:08:29.461 "subsystem": "bdev", 00:08:29.461 "config": [ 00:08:29.461 { 00:08:29.461 "params": { 00:08:29.461 "trtype": "pcie", 00:08:29.461 "traddr": "0000:00:10.0", 00:08:29.461 "name": "Nvme0" 00:08:29.461 }, 00:08:29.461 "method": "bdev_nvme_attach_controller" 00:08:29.461 }, 00:08:29.461 { 00:08:29.461 "params": { 00:08:29.461 "trtype": "pcie", 00:08:29.461 "traddr": "0000:00:11.0", 00:08:29.461 "name": "Nvme1" 00:08:29.461 }, 00:08:29.461 "method": "bdev_nvme_attach_controller" 00:08:29.461 }, 00:08:29.461 { 00:08:29.461 "method": "bdev_wait_for_examine" 00:08:29.461 } 00:08:29.461 ] 00:08:29.461 } 00:08:29.461 ] 00:08:29.461 } 00:08:29.721 [2024-04-23 02:53:08.674833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:29.721 [2024-04-23 02:53:08.688850] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.721 [2024-04-23 02:53:08.720826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.979  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:29.979 00:08:29.979 02:53:09 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:29.979 02:53:09 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:29.979 02:53:09 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:29.980 02:53:09 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:29.980 02:53:09 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:29.980 02:53:09 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.980 02:53:09 -- common/autotest_common.sh@10 -- # set +x 00:08:29.980 [2024-04-23 02:53:09.065913] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:08:29.980 [2024-04-23 02:53:09.066008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77349 ] 00:08:29.980 { 00:08:29.980 "subsystems": [ 00:08:29.980 { 00:08:29.980 "subsystem": "bdev", 00:08:29.980 "config": [ 00:08:29.980 { 00:08:29.980 "params": { 00:08:29.980 "trtype": "pcie", 00:08:29.980 "traddr": "0000:00:10.0", 00:08:29.980 "name": "Nvme0" 00:08:29.980 }, 00:08:29.980 "method": "bdev_nvme_attach_controller" 00:08:29.980 }, 00:08:29.980 { 00:08:29.980 "params": { 00:08:29.980 "trtype": "pcie", 00:08:29.980 "traddr": "0000:00:11.0", 00:08:29.980 "name": "Nvme1" 00:08:29.980 }, 00:08:29.980 "method": "bdev_nvme_attach_controller" 00:08:29.980 }, 00:08:29.980 { 00:08:29.980 "method": "bdev_wait_for_examine" 00:08:29.980 } 00:08:29.980 ] 00:08:29.980 } 00:08:29.980 ] 00:08:29.980 } 00:08:30.239 [2024-04-23 02:53:09.185861] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:30.239 [2024-04-23 02:53:09.199369] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.239 [2024-04-23 02:53:09.231811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.500  Copying: 65/65 [MB] (average 1031 MBps) 00:08:30.500 00:08:30.500 02:53:09 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:30.500 02:53:09 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:30.500 02:53:09 -- dd/common.sh@31 -- # xtrace_disable 00:08:30.500 02:53:09 -- common/autotest_common.sh@10 -- # set +x 00:08:30.759 [2024-04-23 02:53:09.676880] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:30.759 [2024-04-23 02:53:09.676967] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77363 ] 00:08:30.759 { 00:08:30.759 "subsystems": [ 00:08:30.759 { 00:08:30.759 "subsystem": "bdev", 00:08:30.759 "config": [ 00:08:30.759 { 00:08:30.759 "params": { 00:08:30.759 "trtype": "pcie", 00:08:30.759 "traddr": "0000:00:10.0", 00:08:30.759 "name": "Nvme0" 00:08:30.759 }, 00:08:30.759 "method": "bdev_nvme_attach_controller" 00:08:30.759 }, 00:08:30.759 { 00:08:30.759 "params": { 00:08:30.759 "trtype": "pcie", 00:08:30.759 "traddr": "0000:00:11.0", 00:08:30.759 "name": "Nvme1" 00:08:30.759 }, 00:08:30.759 "method": "bdev_nvme_attach_controller" 00:08:30.759 }, 00:08:30.759 { 00:08:30.759 "method": "bdev_wait_for_examine" 00:08:30.759 } 00:08:30.759 ] 00:08:30.759 } 00:08:30.759 ] 00:08:30.759 } 00:08:30.759 [2024-04-23 02:53:09.791633] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
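Both offsets in the dd_offset_magic loop follow the shape just traced: write 65 MiB of data carrying the magic string into Nvme1n1 at the offset, read 1 MiB back from the same offset, and compare. A minimal sketch of one iteration at offset 16, assuming $DD is the spdk_dd binary and $conf the two-controller JSON config printed by gen_conf above (both names are placeholders, not the script's own):

# seek/skip are counted in --bs units (1 MiB), so offset 16 means 16 MiB in
"$DD" --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json "$conf"
"$DD" --ib=Nvme1n1 --of=dd.dump1 --count=1 --skip=16 --bs=1048576 --json "$conf"
# the magic is 26 bytes, exactly what the trace reads back with read -rn26
read -rn26 magic_check < dd.dump1
[[ $magic_check == 'This Is Our Magic, find it' ]]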
00:08:30.759 [2024-04-23 02:53:09.805442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.759 [2024-04-23 02:53:09.839389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.017  Copying: 1024/1024 [kB] (average 500 MBps) 00:08:31.017 00:08:31.017 02:53:10 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:31.017 02:53:10 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:31.017 00:08:31.017 real 0m2.256s 00:08:31.017 user 0m1.663s 00:08:31.017 sys 0m0.582s 00:08:31.017 02:53:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:31.017 ************************************ 00:08:31.017 END TEST dd_offset_magic 00:08:31.017 ************************************ 00:08:31.017 02:53:10 -- common/autotest_common.sh@10 -- # set +x 00:08:31.276 02:53:10 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:31.276 02:53:10 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:31.276 02:53:10 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:31.276 02:53:10 -- dd/common.sh@11 -- # local nvme_ref= 00:08:31.276 02:53:10 -- dd/common.sh@12 -- # local size=4194330 00:08:31.276 02:53:10 -- dd/common.sh@14 -- # local bs=1048576 00:08:31.276 02:53:10 -- dd/common.sh@15 -- # local count=5 00:08:31.276 02:53:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:31.276 02:53:10 -- dd/common.sh@18 -- # gen_conf 00:08:31.276 02:53:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.276 02:53:10 -- common/autotest_common.sh@10 -- # set +x 00:08:31.276 [2024-04-23 02:53:10.234928] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:31.276 [2024-04-23 02:53:10.235004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77395 ] 00:08:31.276 { 00:08:31.276 "subsystems": [ 00:08:31.276 { 00:08:31.276 "subsystem": "bdev", 00:08:31.276 "config": [ 00:08:31.276 { 00:08:31.276 "params": { 00:08:31.276 "trtype": "pcie", 00:08:31.276 "traddr": "0000:00:10.0", 00:08:31.276 "name": "Nvme0" 00:08:31.276 }, 00:08:31.276 "method": "bdev_nvme_attach_controller" 00:08:31.276 }, 00:08:31.276 { 00:08:31.276 "params": { 00:08:31.276 "trtype": "pcie", 00:08:31.276 "traddr": "0000:00:11.0", 00:08:31.276 "name": "Nvme1" 00:08:31.276 }, 00:08:31.276 "method": "bdev_nvme_attach_controller" 00:08:31.276 }, 00:08:31.276 { 00:08:31.276 "method": "bdev_wait_for_examine" 00:08:31.276 } 00:08:31.276 ] 00:08:31.276 } 00:08:31.276 ] 00:08:31.276 } 00:08:31.276 [2024-04-23 02:53:10.353118] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:31.276 [2024-04-23 02:53:10.371019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.276 [2024-04-23 02:53:10.409772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.794  Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:31.794 00:08:31.794 02:53:10 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:31.794 02:53:10 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:31.794 02:53:10 -- dd/common.sh@11 -- # local nvme_ref= 00:08:31.794 02:53:10 -- dd/common.sh@12 -- # local size=4194330 00:08:31.794 02:53:10 -- dd/common.sh@14 -- # local bs=1048576 00:08:31.794 02:53:10 -- dd/common.sh@15 -- # local count=5 00:08:31.794 02:53:10 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:31.794 02:53:10 -- dd/common.sh@18 -- # gen_conf 00:08:31.794 02:53:10 -- dd/common.sh@31 -- # xtrace_disable 00:08:31.794 02:53:10 -- common/autotest_common.sh@10 -- # set +x 00:08:31.794 [2024-04-23 02:53:10.772940] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:31.794 [2024-04-23 02:53:10.773033] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77410 ] 00:08:31.794 { 00:08:31.794 "subsystems": [ 00:08:31.794 { 00:08:31.794 "subsystem": "bdev", 00:08:31.794 "config": [ 00:08:31.794 { 00:08:31.794 "params": { 00:08:31.794 "trtype": "pcie", 00:08:31.794 "traddr": "0000:00:10.0", 00:08:31.794 "name": "Nvme0" 00:08:31.794 }, 00:08:31.794 "method": "bdev_nvme_attach_controller" 00:08:31.794 }, 00:08:31.794 { 00:08:31.794 "params": { 00:08:31.794 "trtype": "pcie", 00:08:31.795 "traddr": "0000:00:11.0", 00:08:31.795 "name": "Nvme1" 00:08:31.795 }, 00:08:31.795 "method": "bdev_nvme_attach_controller" 00:08:31.795 }, 00:08:31.795 { 00:08:31.795 "method": "bdev_wait_for_examine" 00:08:31.795 } 00:08:31.795 ] 00:08:31.795 } 00:08:31.795 ] 00:08:31.795 } 00:08:31.795 [2024-04-23 02:53:10.893995] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:31.795 [2024-04-23 02:53:10.913256] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.795 [2024-04-23 02:53:10.943978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.315  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:32.315 00:08:32.315 02:53:11 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:32.315 ************************************ 00:08:32.315 END TEST spdk_dd_bdev_to_bdev 00:08:32.315 ************************************ 00:08:32.315 00:08:32.315 real 0m5.946s 00:08:32.315 user 0m4.401s 00:08:32.315 sys 0m2.772s 00:08:32.315 02:53:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:32.315 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.315 02:53:11 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:32.315 02:53:11 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:32.315 02:53:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.315 02:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.315 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.315 ************************************ 00:08:32.315 START TEST spdk_dd_uring 00:08:32.315 ************************************ 00:08:32.315 02:53:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:32.315 * Looking for test storage... 00:08:32.315 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:32.315 02:53:11 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.315 02:53:11 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.315 02:53:11 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.315 02:53:11 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.315 02:53:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.315 02:53:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.315 02:53:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.315 02:53:11 -- paths/export.sh@5 -- # export PATH 00:08:32.316 02:53:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.316 02:53:11 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:32.316 02:53:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:32.316 02:53:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.316 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.574 ************************************ 00:08:32.574 START TEST dd_uring_copy 00:08:32.574 ************************************ 00:08:32.574 02:53:11 -- common/autotest_common.sh@1111 -- # uring_zram_copy 00:08:32.574 02:53:11 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:32.574 02:53:11 -- dd/uring.sh@16 -- # local magic 00:08:32.574 02:53:11 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:32.574 02:53:11 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:32.574 02:53:11 -- dd/uring.sh@19 -- # local verify_magic 00:08:32.574 02:53:11 -- dd/uring.sh@21 -- # init_zram 00:08:32.574 02:53:11 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:32.574 02:53:11 -- dd/common.sh@164 -- # return 00:08:32.574 02:53:11 -- dd/uring.sh@22 -- # create_zram_dev 00:08:32.574 02:53:11 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:32.574 02:53:11 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:32.574 02:53:11 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:32.574 02:53:11 -- dd/common.sh@181 -- # local id=1 00:08:32.574 02:53:11 -- dd/common.sh@182 -- # local size=512M 00:08:32.574 02:53:11 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:32.574 02:53:11 -- dd/common.sh@186 -- # echo 512M 00:08:32.574 02:53:11 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:32.574 02:53:11 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:32.574 02:53:11 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:32.574 02:53:11 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:32.574 02:53:11 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:32.574 02:53:11 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:32.574 02:53:11 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:32.574 02:53:11 -- dd/common.sh@98 -- # xtrace_disable 
00:08:32.574 02:53:11 -- common/autotest_common.sh@10 -- # set +x 00:08:32.574 02:53:11 -- dd/uring.sh@41 -- # magic=hf47mky2douy5wbmdfr7nzr6tsu5d985od46u4qdtepikrmkmss84ded5hyt6az3sf5lzhy2jeshyfxxdrlt5lopgm2u5r5u4wtzq5w9px45irnz53lsxmhmdypgmfp796huhgwhrjb9aiarqyv5eo5kde0jpqoqchym8l17usjzjy83wvskwze80dznipnofo0fb7qvpjh7xzrtrl2bcm8tssk71f0q94qlg7fosp6a0h675plh6ujwcgwyzy4m4nt37qdjj5tj1gvg960v0y1jxes3ct2m5qumwetbbkoqph5tddkfdd1mbl0b8x4zc7dt58w09ncgb1zxr0blv5eqo64xupx0tx5va0k0fd7we09sc1w79sgwm6i35iwwj4cnv5s9eiu3bdu49to6phmpgym400qt9up12w0pj51j3391z4ycdclv1abymx9yrlawuwbaxhe0fda7m0dkcja7mju0cib2r6isam72mx73blmvgobbyq4ufwvxbyrg8d3m06ud8a3y3tktqjsnws0z9qaextur63skzpqvoo7yvhmddvdalttgeyqowiw21uexl5jnecm8mkefnzo55sksg9p7lizju24imlld73j0bj7hxizdx6avib9burbnglpy8dvxv88jd28pt7abyxh2557m0xv318phkjfzyjcm3jcn8d0s6iz7fynzt5zvrrq57u7bq6rl67qw9c2t5ueipjxssbvfi5w8ltxacktmmwmelyt2t5zbcqq7on4pau4qun3dgqo4o7vfyjqdqsefwpr8n9yc3q0gtmerudrjrcgr1qegvnnocp5hlyj76ujw60kvkf48wogj1l775tsqi6lghyiu4j23pd1o0buxqvcigamvi21t19lxrj7eryp2zepombejdbr2ckhrbbp5mb7d9xmdis1kjftrvmy5sem4hmxajodtn2uq843h6033n6fs5mdnw29xm2x6uyglp9utfoxlo1prv4m73onz0kanmfo0jnueuxqzpgty 00:08:32.574 02:53:11 -- dd/uring.sh@42 -- # echo hf47mky2douy5wbmdfr7nzr6tsu5d985od46u4qdtepikrmkmss84ded5hyt6az3sf5lzhy2jeshyfxxdrlt5lopgm2u5r5u4wtzq5w9px45irnz53lsxmhmdypgmfp796huhgwhrjb9aiarqyv5eo5kde0jpqoqchym8l17usjzjy83wvskwze80dznipnofo0fb7qvpjh7xzrtrl2bcm8tssk71f0q94qlg7fosp6a0h675plh6ujwcgwyzy4m4nt37qdjj5tj1gvg960v0y1jxes3ct2m5qumwetbbkoqph5tddkfdd1mbl0b8x4zc7dt58w09ncgb1zxr0blv5eqo64xupx0tx5va0k0fd7we09sc1w79sgwm6i35iwwj4cnv5s9eiu3bdu49to6phmpgym400qt9up12w0pj51j3391z4ycdclv1abymx9yrlawuwbaxhe0fda7m0dkcja7mju0cib2r6isam72mx73blmvgobbyq4ufwvxbyrg8d3m06ud8a3y3tktqjsnws0z9qaextur63skzpqvoo7yvhmddvdalttgeyqowiw21uexl5jnecm8mkefnzo55sksg9p7lizju24imlld73j0bj7hxizdx6avib9burbnglpy8dvxv88jd28pt7abyxh2557m0xv318phkjfzyjcm3jcn8d0s6iz7fynzt5zvrrq57u7bq6rl67qw9c2t5ueipjxssbvfi5w8ltxacktmmwmelyt2t5zbcqq7on4pau4qun3dgqo4o7vfyjqdqsefwpr8n9yc3q0gtmerudrjrcgr1qegvnnocp5hlyj76ujw60kvkf48wogj1l775tsqi6lghyiu4j23pd1o0buxqvcigamvi21t19lxrj7eryp2zepombejdbr2ckhrbbp5mb7d9xmdis1kjftrvmy5sem4hmxajodtn2uq843h6033n6fs5mdnw29xm2x6uyglp9utfoxlo1prv4m73onz0kanmfo0jnueuxqzpgty 00:08:32.574 02:53:11 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:32.574 [2024-04-23 02:53:11.639865] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:32.574 [2024-04-23 02:53:11.639971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77489 ] 00:08:32.832 [2024-04-23 02:53:11.761449] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
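The uring_zram_copy setup traced above backs the uring0 bdev with a compressed-RAM block device. A minimal sketch of that setup through the standard zram-control sysfs interface, which is what init_zram and set_zram_dev exercise; device id 1 and the 512M size are the values in the trace, and error handling is omitted:

# hot_add allocates a fresh zram device and prints its id (1 in the trace)
id=$(cat /sys/class/zram-control/hot_add)
# writing the size activates the device; zram accepts K/M/G suffixes
echo 512M > "/sys/block/zram${id}/disksize"
# spdk_dd then layers a uring bdev over it, per the gen_conf block above:
#   { "method": "bdev_uring_create",
#     "params": { "filename": "/dev/zram1", "name": "uring0" } }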
00:08:32.832 [2024-04-23 02:53:11.780791] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.832 [2024-04-23 02:53:11.816000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.400  Copying: 511/511 [MB] (average 1438 MBps) 00:08:33.400 00:08:33.400 02:53:12 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:33.400 02:53:12 -- dd/uring.sh@54 -- # gen_conf 00:08:33.400 02:53:12 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.400 02:53:12 -- common/autotest_common.sh@10 -- # set +x 00:08:33.659 [2024-04-23 02:53:12.593788] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:33.659 [2024-04-23 02:53:12.593872] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77505 ] 00:08:33.659 { 00:08:33.659 "subsystems": [ 00:08:33.659 { 00:08:33.659 "subsystem": "bdev", 00:08:33.659 "config": [ 00:08:33.659 { 00:08:33.659 "params": { 00:08:33.659 "block_size": 512, 00:08:33.659 "num_blocks": 1048576, 00:08:33.659 "name": "malloc0" 00:08:33.659 }, 00:08:33.659 "method": "bdev_malloc_create" 00:08:33.659 }, 00:08:33.659 { 00:08:33.659 "params": { 00:08:33.659 "filename": "/dev/zram1", 00:08:33.659 "name": "uring0" 00:08:33.659 }, 00:08:33.659 "method": "bdev_uring_create" 00:08:33.659 }, 00:08:33.659 { 00:08:33.659 "method": "bdev_wait_for_examine" 00:08:33.659 } 00:08:33.659 ] 00:08:33.659 } 00:08:33.659 ] 00:08:33.659 } 00:08:33.659 [2024-04-23 02:53:12.708943] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:33.659 [2024-04-23 02:53:12.726235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.659 [2024-04-23 02:53:12.757271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.239  Copying: 231/512 [MB] (231 MBps) Copying: 449/512 [MB] (217 MBps) Copying: 512/512 [MB] (average 224 MBps) 00:08:36.239 00:08:36.239 02:53:15 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:36.239 02:53:15 -- dd/uring.sh@60 -- # gen_conf 00:08:36.239 02:53:15 -- dd/common.sh@31 -- # xtrace_disable 00:08:36.239 02:53:15 -- common/autotest_common.sh@10 -- # set +x 00:08:36.498 [2024-04-23 02:53:15.431419] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:08:36.498 [2024-04-23 02:53:15.431527] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77544 ] 00:08:36.498 { 00:08:36.498 "subsystems": [ 00:08:36.498 { 00:08:36.498 "subsystem": "bdev", 00:08:36.498 "config": [ 00:08:36.498 { 00:08:36.498 "params": { 00:08:36.498 "block_size": 512, 00:08:36.498 "num_blocks": 1048576, 00:08:36.498 "name": "malloc0" 00:08:36.498 }, 00:08:36.498 "method": "bdev_malloc_create" 00:08:36.498 }, 00:08:36.498 { 00:08:36.498 "params": { 00:08:36.498 "filename": "/dev/zram1", 00:08:36.498 "name": "uring0" 00:08:36.498 }, 00:08:36.498 "method": "bdev_uring_create" 00:08:36.498 }, 00:08:36.498 { 00:08:36.498 "method": "bdev_wait_for_examine" 00:08:36.498 } 00:08:36.498 ] 00:08:36.498 } 00:08:36.498 ] 00:08:36.498 } 00:08:36.498 [2024-04-23 02:53:15.551826] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:36.498 [2024-04-23 02:53:15.568841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.498 [2024-04-23 02:53:15.604893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.767  Copying: 183/512 [MB] (183 MBps) Copying: 353/512 [MB] (170 MBps) Copying: 512/512 [MB] (average 177 MBps) 00:08:39.767 00:08:39.767 02:53:18 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:39.767 02:53:18 -- dd/uring.sh@66 -- # [[ hf47mky2douy5wbmdfr7nzr6tsu5d985od46u4qdtepikrmkmss84ded5hyt6az3sf5lzhy2jeshyfxxdrlt5lopgm2u5r5u4wtzq5w9px45irnz53lsxmhmdypgmfp796huhgwhrjb9aiarqyv5eo5kde0jpqoqchym8l17usjzjy83wvskwze80dznipnofo0fb7qvpjh7xzrtrl2bcm8tssk71f0q94qlg7fosp6a0h675plh6ujwcgwyzy4m4nt37qdjj5tj1gvg960v0y1jxes3ct2m5qumwetbbkoqph5tddkfdd1mbl0b8x4zc7dt58w09ncgb1zxr0blv5eqo64xupx0tx5va0k0fd7we09sc1w79sgwm6i35iwwj4cnv5s9eiu3bdu49to6phmpgym400qt9up12w0pj51j3391z4ycdclv1abymx9yrlawuwbaxhe0fda7m0dkcja7mju0cib2r6isam72mx73blmvgobbyq4ufwvxbyrg8d3m06ud8a3y3tktqjsnws0z9qaextur63skzpqvoo7yvhmddvdalttgeyqowiw21uexl5jnecm8mkefnzo55sksg9p7lizju24imlld73j0bj7hxizdx6avib9burbnglpy8dvxv88jd28pt7abyxh2557m0xv318phkjfzyjcm3jcn8d0s6iz7fynzt5zvrrq57u7bq6rl67qw9c2t5ueipjxssbvfi5w8ltxacktmmwmelyt2t5zbcqq7on4pau4qun3dgqo4o7vfyjqdqsefwpr8n9yc3q0gtmerudrjrcgr1qegvnnocp5hlyj76ujw60kvkf48wogj1l775tsqi6lghyiu4j23pd1o0buxqvcigamvi21t19lxrj7eryp2zepombejdbr2ckhrbbp5mb7d9xmdis1kjftrvmy5sem4hmxajodtn2uq843h6033n6fs5mdnw29xm2x6uyglp9utfoxlo1prv4m73onz0kanmfo0jnueuxqzpgty == 
\h\f\4\7\m\k\y\2\d\o\u\y\5\w\b\m\d\f\r\7\n\z\r\6\t\s\u\5\d\9\8\5\o\d\4\6\u\4\q\d\t\e\p\i\k\r\m\k\m\s\s\8\4\d\e\d\5\h\y\t\6\a\z\3\s\f\5\l\z\h\y\2\j\e\s\h\y\f\x\x\d\r\l\t\5\l\o\p\g\m\2\u\5\r\5\u\4\w\t\z\q\5\w\9\p\x\4\5\i\r\n\z\5\3\l\s\x\m\h\m\d\y\p\g\m\f\p\7\9\6\h\u\h\g\w\h\r\j\b\9\a\i\a\r\q\y\v\5\e\o\5\k\d\e\0\j\p\q\o\q\c\h\y\m\8\l\1\7\u\s\j\z\j\y\8\3\w\v\s\k\w\z\e\8\0\d\z\n\i\p\n\o\f\o\0\f\b\7\q\v\p\j\h\7\x\z\r\t\r\l\2\b\c\m\8\t\s\s\k\7\1\f\0\q\9\4\q\l\g\7\f\o\s\p\6\a\0\h\6\7\5\p\l\h\6\u\j\w\c\g\w\y\z\y\4\m\4\n\t\3\7\q\d\j\j\5\t\j\1\g\v\g\9\6\0\v\0\y\1\j\x\e\s\3\c\t\2\m\5\q\u\m\w\e\t\b\b\k\o\q\p\h\5\t\d\d\k\f\d\d\1\m\b\l\0\b\8\x\4\z\c\7\d\t\5\8\w\0\9\n\c\g\b\1\z\x\r\0\b\l\v\5\e\q\o\6\4\x\u\p\x\0\t\x\5\v\a\0\k\0\f\d\7\w\e\0\9\s\c\1\w\7\9\s\g\w\m\6\i\3\5\i\w\w\j\4\c\n\v\5\s\9\e\i\u\3\b\d\u\4\9\t\o\6\p\h\m\p\g\y\m\4\0\0\q\t\9\u\p\1\2\w\0\p\j\5\1\j\3\3\9\1\z\4\y\c\d\c\l\v\1\a\b\y\m\x\9\y\r\l\a\w\u\w\b\a\x\h\e\0\f\d\a\7\m\0\d\k\c\j\a\7\m\j\u\0\c\i\b\2\r\6\i\s\a\m\7\2\m\x\7\3\b\l\m\v\g\o\b\b\y\q\4\u\f\w\v\x\b\y\r\g\8\d\3\m\0\6\u\d\8\a\3\y\3\t\k\t\q\j\s\n\w\s\0\z\9\q\a\e\x\t\u\r\6\3\s\k\z\p\q\v\o\o\7\y\v\h\m\d\d\v\d\a\l\t\t\g\e\y\q\o\w\i\w\2\1\u\e\x\l\5\j\n\e\c\m\8\m\k\e\f\n\z\o\5\5\s\k\s\g\9\p\7\l\i\z\j\u\2\4\i\m\l\l\d\7\3\j\0\b\j\7\h\x\i\z\d\x\6\a\v\i\b\9\b\u\r\b\n\g\l\p\y\8\d\v\x\v\8\8\j\d\2\8\p\t\7\a\b\y\x\h\2\5\5\7\m\0\x\v\3\1\8\p\h\k\j\f\z\y\j\c\m\3\j\c\n\8\d\0\s\6\i\z\7\f\y\n\z\t\5\z\v\r\r\q\5\7\u\7\b\q\6\r\l\6\7\q\w\9\c\2\t\5\u\e\i\p\j\x\s\s\b\v\f\i\5\w\8\l\t\x\a\c\k\t\m\m\w\m\e\l\y\t\2\t\5\z\b\c\q\q\7\o\n\4\p\a\u\4\q\u\n\3\d\g\q\o\4\o\7\v\f\y\j\q\d\q\s\e\f\w\p\r\8\n\9\y\c\3\q\0\g\t\m\e\r\u\d\r\j\r\c\g\r\1\q\e\g\v\n\n\o\c\p\5\h\l\y\j\7\6\u\j\w\6\0\k\v\k\f\4\8\w\o\g\j\1\l\7\7\5\t\s\q\i\6\l\g\h\y\i\u\4\j\2\3\p\d\1\o\0\b\u\x\q\v\c\i\g\a\m\v\i\2\1\t\1\9\l\x\r\j\7\e\r\y\p\2\z\e\p\o\m\b\e\j\d\b\r\2\c\k\h\r\b\b\p\5\m\b\7\d\9\x\m\d\i\s\1\k\j\f\t\r\v\m\y\5\s\e\m\4\h\m\x\a\j\o\d\t\n\2\u\q\8\4\3\h\6\0\3\3\n\6\f\s\5\m\d\n\w\2\9\x\m\2\x\6\u\y\g\l\p\9\u\t\f\o\x\l\o\1\p\r\v\4\m\7\3\o\n\z\0\k\a\n\m\f\o\0\j\n\u\e\u\x\q\z\p\g\t\y ]] 00:08:39.767 02:53:18 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:39.768 02:53:18 -- dd/uring.sh@69 -- # [[ hf47mky2douy5wbmdfr7nzr6tsu5d985od46u4qdtepikrmkmss84ded5hyt6az3sf5lzhy2jeshyfxxdrlt5lopgm2u5r5u4wtzq5w9px45irnz53lsxmhmdypgmfp796huhgwhrjb9aiarqyv5eo5kde0jpqoqchym8l17usjzjy83wvskwze80dznipnofo0fb7qvpjh7xzrtrl2bcm8tssk71f0q94qlg7fosp6a0h675plh6ujwcgwyzy4m4nt37qdjj5tj1gvg960v0y1jxes3ct2m5qumwetbbkoqph5tddkfdd1mbl0b8x4zc7dt58w09ncgb1zxr0blv5eqo64xupx0tx5va0k0fd7we09sc1w79sgwm6i35iwwj4cnv5s9eiu3bdu49to6phmpgym400qt9up12w0pj51j3391z4ycdclv1abymx9yrlawuwbaxhe0fda7m0dkcja7mju0cib2r6isam72mx73blmvgobbyq4ufwvxbyrg8d3m06ud8a3y3tktqjsnws0z9qaextur63skzpqvoo7yvhmddvdalttgeyqowiw21uexl5jnecm8mkefnzo55sksg9p7lizju24imlld73j0bj7hxizdx6avib9burbnglpy8dvxv88jd28pt7abyxh2557m0xv318phkjfzyjcm3jcn8d0s6iz7fynzt5zvrrq57u7bq6rl67qw9c2t5ueipjxssbvfi5w8ltxacktmmwmelyt2t5zbcqq7on4pau4qun3dgqo4o7vfyjqdqsefwpr8n9yc3q0gtmerudrjrcgr1qegvnnocp5hlyj76ujw60kvkf48wogj1l775tsqi6lghyiu4j23pd1o0buxqvcigamvi21t19lxrj7eryp2zepombejdbr2ckhrbbp5mb7d9xmdis1kjftrvmy5sem4hmxajodtn2uq843h6033n6fs5mdnw29xm2x6uyglp9utfoxlo1prv4m73onz0kanmfo0jnueuxqzpgty == 
\h\f\4\7\m\k\y\2\d\o\u\y\5\w\b\m\d\f\r\7\n\z\r\6\t\s\u\5\d\9\8\5\o\d\4\6\u\4\q\d\t\e\p\i\k\r\m\k\m\s\s\8\4\d\e\d\5\h\y\t\6\a\z\3\s\f\5\l\z\h\y\2\j\e\s\h\y\f\x\x\d\r\l\t\5\l\o\p\g\m\2\u\5\r\5\u\4\w\t\z\q\5\w\9\p\x\4\5\i\r\n\z\5\3\l\s\x\m\h\m\d\y\p\g\m\f\p\7\9\6\h\u\h\g\w\h\r\j\b\9\a\i\a\r\q\y\v\5\e\o\5\k\d\e\0\j\p\q\o\q\c\h\y\m\8\l\1\7\u\s\j\z\j\y\8\3\w\v\s\k\w\z\e\8\0\d\z\n\i\p\n\o\f\o\0\f\b\7\q\v\p\j\h\7\x\z\r\t\r\l\2\b\c\m\8\t\s\s\k\7\1\f\0\q\9\4\q\l\g\7\f\o\s\p\6\a\0\h\6\7\5\p\l\h\6\u\j\w\c\g\w\y\z\y\4\m\4\n\t\3\7\q\d\j\j\5\t\j\1\g\v\g\9\6\0\v\0\y\1\j\x\e\s\3\c\t\2\m\5\q\u\m\w\e\t\b\b\k\o\q\p\h\5\t\d\d\k\f\d\d\1\m\b\l\0\b\8\x\4\z\c\7\d\t\5\8\w\0\9\n\c\g\b\1\z\x\r\0\b\l\v\5\e\q\o\6\4\x\u\p\x\0\t\x\5\v\a\0\k\0\f\d\7\w\e\0\9\s\c\1\w\7\9\s\g\w\m\6\i\3\5\i\w\w\j\4\c\n\v\5\s\9\e\i\u\3\b\d\u\4\9\t\o\6\p\h\m\p\g\y\m\4\0\0\q\t\9\u\p\1\2\w\0\p\j\5\1\j\3\3\9\1\z\4\y\c\d\c\l\v\1\a\b\y\m\x\9\y\r\l\a\w\u\w\b\a\x\h\e\0\f\d\a\7\m\0\d\k\c\j\a\7\m\j\u\0\c\i\b\2\r\6\i\s\a\m\7\2\m\x\7\3\b\l\m\v\g\o\b\b\y\q\4\u\f\w\v\x\b\y\r\g\8\d\3\m\0\6\u\d\8\a\3\y\3\t\k\t\q\j\s\n\w\s\0\z\9\q\a\e\x\t\u\r\6\3\s\k\z\p\q\v\o\o\7\y\v\h\m\d\d\v\d\a\l\t\t\g\e\y\q\o\w\i\w\2\1\u\e\x\l\5\j\n\e\c\m\8\m\k\e\f\n\z\o\5\5\s\k\s\g\9\p\7\l\i\z\j\u\2\4\i\m\l\l\d\7\3\j\0\b\j\7\h\x\i\z\d\x\6\a\v\i\b\9\b\u\r\b\n\g\l\p\y\8\d\v\x\v\8\8\j\d\2\8\p\t\7\a\b\y\x\h\2\5\5\7\m\0\x\v\3\1\8\p\h\k\j\f\z\y\j\c\m\3\j\c\n\8\d\0\s\6\i\z\7\f\y\n\z\t\5\z\v\r\r\q\5\7\u\7\b\q\6\r\l\6\7\q\w\9\c\2\t\5\u\e\i\p\j\x\s\s\b\v\f\i\5\w\8\l\t\x\a\c\k\t\m\m\w\m\e\l\y\t\2\t\5\z\b\c\q\q\7\o\n\4\p\a\u\4\q\u\n\3\d\g\q\o\4\o\7\v\f\y\j\q\d\q\s\e\f\w\p\r\8\n\9\y\c\3\q\0\g\t\m\e\r\u\d\r\j\r\c\g\r\1\q\e\g\v\n\n\o\c\p\5\h\l\y\j\7\6\u\j\w\6\0\k\v\k\f\4\8\w\o\g\j\1\l\7\7\5\t\s\q\i\6\l\g\h\y\i\u\4\j\2\3\p\d\1\o\0\b\u\x\q\v\c\i\g\a\m\v\i\2\1\t\1\9\l\x\r\j\7\e\r\y\p\2\z\e\p\o\m\b\e\j\d\b\r\2\c\k\h\r\b\b\p\5\m\b\7\d\9\x\m\d\i\s\1\k\j\f\t\r\v\m\y\5\s\e\m\4\h\m\x\a\j\o\d\t\n\2\u\q\8\4\3\h\6\0\3\3\n\6\f\s\5\m\d\n\w\2\9\x\m\2\x\6\u\y\g\l\p\9\u\t\f\o\x\l\o\1\p\r\v\4\m\7\3\o\n\z\0\k\a\n\m\f\o\0\j\n\u\e\u\x\q\z\p\g\t\y ]] 00:08:39.768 02:53:18 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:40.336 02:53:19 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:40.336 02:53:19 -- dd/uring.sh@75 -- # gen_conf 00:08:40.336 02:53:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:40.336 02:53:19 -- common/autotest_common.sh@10 -- # set +x 00:08:40.336 [2024-04-23 02:53:19.267655] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
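The fully backslash-escaped strings in the two [[ ... ]] checks above are ordinary xtrace rendering, not corruption: inside [[ lhs == rhs ]] an unquoted rhs is treated as a glob pattern, so the script quotes the expected magic string, and bash's trace escapes every character of the quoted operand to show it will be matched literally. A minimal reproduction (variable values are illustrative):

magic='hf4'
verify_magic='hf4'
set -x
[[ $verify_magic == "$magic" ]]   # traces as: [[ hf4 == \h\f\4 ]]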
00:08:40.336 [2024-04-23 02:53:19.267747] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77605 ] 00:08:40.336 { 00:08:40.336 "subsystems": [ 00:08:40.336 { 00:08:40.336 "subsystem": "bdev", 00:08:40.336 "config": [ 00:08:40.336 { 00:08:40.336 "params": { 00:08:40.336 "block_size": 512, 00:08:40.336 "num_blocks": 1048576, 00:08:40.336 "name": "malloc0" 00:08:40.336 }, 00:08:40.336 "method": "bdev_malloc_create" 00:08:40.336 }, 00:08:40.336 { 00:08:40.336 "params": { 00:08:40.336 "filename": "/dev/zram1", 00:08:40.336 "name": "uring0" 00:08:40.336 }, 00:08:40.336 "method": "bdev_uring_create" 00:08:40.336 }, 00:08:40.336 { 00:08:40.336 "method": "bdev_wait_for_examine" 00:08:40.336 } 00:08:40.336 ] 00:08:40.336 } 00:08:40.336 ] 00:08:40.336 } 00:08:40.336 [2024-04-23 02:53:19.389829] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:40.336 [2024-04-23 02:53:19.406957] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.336 [2024-04-23 02:53:19.437263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.844  Copying: 173/512 [MB] (173 MBps) Copying: 336/512 [MB] (162 MBps) Copying: 507/512 [MB] (170 MBps) Copying: 512/512 [MB] (average 169 MBps) 00:08:43.844 00:08:43.844 02:53:22 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:43.844 02:53:22 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:43.844 02:53:22 -- dd/uring.sh@87 -- # : 00:08:43.844 02:53:22 -- dd/uring.sh@87 -- # : 00:08:43.844 02:53:22 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:43.844 02:53:22 -- dd/uring.sh@87 -- # gen_conf 00:08:43.844 02:53:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:43.844 02:53:22 -- common/autotest_common.sh@10 -- # set +x 00:08:43.844 [2024-04-23 02:53:22.862832] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:43.844 [2024-04-23 02:53:22.862922] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77650 ] 00:08:43.844 { 00:08:43.844 "subsystems": [ 00:08:43.844 { 00:08:43.844 "subsystem": "bdev", 00:08:43.844 "config": [ 00:08:43.844 { 00:08:43.844 "params": { 00:08:43.844 "block_size": 512, 00:08:43.844 "num_blocks": 1048576, 00:08:43.844 "name": "malloc0" 00:08:43.844 }, 00:08:43.844 "method": "bdev_malloc_create" 00:08:43.844 }, 00:08:43.844 { 00:08:43.844 "params": { 00:08:43.844 "filename": "/dev/zram1", 00:08:43.844 "name": "uring0" 00:08:43.844 }, 00:08:43.844 "method": "bdev_uring_create" 00:08:43.844 }, 00:08:43.844 { 00:08:43.844 "params": { 00:08:43.844 "name": "uring0" 00:08:43.844 }, 00:08:43.844 "method": "bdev_uring_delete" 00:08:43.844 }, 00:08:43.844 { 00:08:43.844 "method": "bdev_wait_for_examine" 00:08:43.844 } 00:08:43.844 ] 00:08:43.844 } 00:08:43.844 ] 00:08:43.844 } 00:08:43.844 [2024-04-23 02:53:22.982691] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
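This run's config repeats the earlier create sequence with one extra step: bdev_uring_delete removes uring0 again before any I/O is issued, and --if/--of point at ':' no-op streams on /dev/fd/62 and /dev/fd/61, so there is nothing to copy. The 'Copying: 0/0 [B]' line below therefore appears to be the expected outcome; the run seems to exist to exercise the delete path during startup. The relevant tail of the logged config, restated:

{"params": {"filename": "/dev/zram1", "name": "uring0"}, "method": "bdev_uring_create"},
{"params": {"name": "uring0"}, "method": "bdev_uring_delete"},
{"method": "bdev_wait_for_examine"}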
00:08:43.844 [2024-04-23 02:53:22.994464] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.103 [2024-04-23 02:53:23.027736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.362  Copying: 0/0 [B] (average 0 Bps) 00:08:44.362 00:08:44.362 02:53:23 -- dd/uring.sh@94 -- # : 00:08:44.362 02:53:23 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:44.362 02:53:23 -- dd/uring.sh@94 -- # gen_conf 00:08:44.362 02:53:23 -- dd/common.sh@31 -- # xtrace_disable 00:08:44.362 02:53:23 -- common/autotest_common.sh@638 -- # local es=0 00:08:44.362 02:53:23 -- common/autotest_common.sh@10 -- # set +x 00:08:44.362 02:53:23 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:44.362 02:53:23 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.362 02:53:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:44.362 02:53:23 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.362 02:53:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:44.362 02:53:23 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.362 02:53:23 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:44.362 02:53:23 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:44.362 02:53:23 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:44.362 02:53:23 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:44.621 [2024-04-23 02:53:23.538052] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:44.621 [2024-04-23 02:53:23.538769] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77688 ] 00:08:44.621 { 00:08:44.621 "subsystems": [ 00:08:44.621 { 00:08:44.621 "subsystem": "bdev", 00:08:44.621 "config": [ 00:08:44.621 { 00:08:44.621 "params": { 00:08:44.621 "block_size": 512, 00:08:44.621 "num_blocks": 1048576, 00:08:44.621 "name": "malloc0" 00:08:44.621 }, 00:08:44.621 "method": "bdev_malloc_create" 00:08:44.621 }, 00:08:44.621 { 00:08:44.621 "params": { 00:08:44.621 "filename": "/dev/zram1", 00:08:44.621 "name": "uring0" 00:08:44.621 }, 00:08:44.621 "method": "bdev_uring_create" 00:08:44.621 }, 00:08:44.621 { 00:08:44.621 "params": { 00:08:44.621 "name": "uring0" 00:08:44.621 }, 00:08:44.621 "method": "bdev_uring_delete" 00:08:44.621 }, 00:08:44.621 { 00:08:44.621 "method": "bdev_wait_for_examine" 00:08:44.621 } 00:08:44.621 ] 00:08:44.621 } 00:08:44.621 ] 00:08:44.621 } 00:08:44.621 [2024-04-23 02:53:23.659522] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
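The spdk_dd invocation below is wrapped in the NOT helper, so a non-zero exit is the asserted result: its config deletes uring0 during startup and the subsequent --ib=uring0 open is expected to fail. A much-simplified sketch of such a helper (an assumption for illustration; the real autotest_common.sh version also normalizes exit statuses, as the es=237 / es=109 / es=1 lines further down show):

# succeeds only when the wrapped command fails
NOT() {
    if "$@"; then
        return 1
    fi
    return 0
}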
00:08:44.621 [2024-04-23 02:53:23.678136] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.621 [2024-04-23 02:53:23.708431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.879 [2024-04-23 02:53:23.852745] bdev.c:8067:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:44.879 [2024-04-23 02:53:23.852793] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:44.879 [2024-04-23 02:53:23.852819] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:44.879 [2024-04-23 02:53:23.852828] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.879 [2024-04-23 02:53:24.004332] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:45.138 02:53:24 -- common/autotest_common.sh@641 -- # es=237 00:08:45.138 02:53:24 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:45.138 02:53:24 -- common/autotest_common.sh@650 -- # es=109 00:08:45.138 02:53:24 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:45.138 02:53:24 -- common/autotest_common.sh@658 -- # es=1 00:08:45.138 02:53:24 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:45.138 02:53:24 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:45.138 02:53:24 -- dd/common.sh@172 -- # local id=1 00:08:45.139 02:53:24 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:45.139 02:53:24 -- dd/common.sh@176 -- # echo 1 00:08:45.139 02:53:24 -- dd/common.sh@177 -- # echo 1 00:08:45.139 02:53:24 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:45.415 00:08:45.415 ************************************ 00:08:45.415 END TEST dd_uring_copy 00:08:45.415 ************************************ 00:08:45.415 real 0m12.757s 00:08:45.415 user 0m8.590s 00:08:45.415 sys 0m11.141s 00:08:45.415 02:53:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.415 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.415 00:08:45.415 real 0m12.963s 00:08:45.415 user 0m8.667s 00:08:45.415 sys 0m11.258s 00:08:45.415 02:53:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:45.415 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.415 ************************************ 00:08:45.415 END TEST spdk_dd_uring 00:08:45.415 ************************************ 00:08:45.415 02:53:24 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:45.415 02:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.415 02:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.415 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.415 ************************************ 00:08:45.415 START TEST spdk_dd_sparse 00:08:45.415 ************************************ 00:08:45.415 02:53:24 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:45.415 * Looking for test storage... 
00:08:45.415 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:45.415 02:53:24 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:45.415 02:53:24 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.415 02:53:24 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.415 02:53:24 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.415 02:53:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.415 02:53:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.415 02:53:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.415 02:53:24 -- paths/export.sh@5 -- # export PATH 00:08:45.415 02:53:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.415 02:53:24 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:45.415 02:53:24 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:45.415 02:53:24 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:45.415 02:53:24 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:45.415 02:53:24 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:45.679 02:53:24 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:45.679 02:53:24 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:45.679 02:53:24 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:45.679 02:53:24 -- dd/sparse.sh@118 -- # prepare 00:08:45.679 02:53:24 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:45.679 02:53:24 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 
bs=4M count=1 00:08:45.679 1+0 records in 00:08:45.679 1+0 records out 00:08:45.679 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00659494 s, 636 MB/s 00:08:45.679 02:53:24 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:45.679 1+0 records in 00:08:45.679 1+0 records out 00:08:45.679 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00328891 s, 1.3 GB/s 00:08:45.679 02:53:24 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:45.679 1+0 records in 00:08:45.679 1+0 records out 00:08:45.679 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00622714 s, 674 MB/s 00:08:45.679 02:53:24 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:45.679 02:53:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:45.679 02:53:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.679 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.679 ************************************ 00:08:45.679 START TEST dd_sparse_file_to_file 00:08:45.679 ************************************ 00:08:45.679 02:53:24 -- common/autotest_common.sh@1111 -- # file_to_file 00:08:45.679 02:53:24 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:45.679 02:53:24 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:45.679 02:53:24 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:45.679 02:53:24 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:45.679 02:53:24 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:45.679 02:53:24 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:45.679 02:53:24 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:45.679 02:53:24 -- dd/sparse.sh@41 -- # gen_conf 00:08:45.679 02:53:24 -- dd/common.sh@31 -- # xtrace_disable 00:08:45.679 02:53:24 -- common/autotest_common.sh@10 -- # set +x 00:08:45.679 [2024-04-23 02:53:24.717659] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:45.679 [2024-04-23 02:53:24.718172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77783 ] 00:08:45.679 { 00:08:45.679 "subsystems": [ 00:08:45.679 { 00:08:45.679 "subsystem": "bdev", 00:08:45.679 "config": [ 00:08:45.679 { 00:08:45.679 "params": { 00:08:45.679 "block_size": 4096, 00:08:45.679 "filename": "dd_sparse_aio_disk", 00:08:45.679 "name": "dd_aio" 00:08:45.679 }, 00:08:45.679 "method": "bdev_aio_create" 00:08:45.679 }, 00:08:45.679 { 00:08:45.679 "params": { 00:08:45.679 "lvs_name": "dd_lvstore", 00:08:45.679 "bdev_name": "dd_aio" 00:08:45.679 }, 00:08:45.679 "method": "bdev_lvol_create_lvstore" 00:08:45.679 }, 00:08:45.679 { 00:08:45.679 "method": "bdev_wait_for_examine" 00:08:45.679 } 00:08:45.679 ] 00:08:45.679 } 00:08:45.679 ] 00:08:45.679 } 00:08:45.939 [2024-04-23 02:53:24.839025] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
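The prepare step above lays file_zero1 out as a sparse file: with bs=4M, seek=4 and seek=8 place the second and third writes at 16 MiB and 32 MiB, so the apparent size is 9 x 4 MiB = 36 MiB while only 3 x 4 MiB = 12 MiB is allocated. Restated from the logged commands, with the resulting extents as comments:

truncate dd_sparse_aio_disk --size 104857600        # 100 MiB AIO backing file
dd if=/dev/zero of=file_zero1 bs=4M count=1         # data in [0M, 4M)
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4  # data in [16M, 20M), hole before it
dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8  # data in [32M, 36M), hole before it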
00:08:45.939 [2024-04-23 02:53:24.858179] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.939 [2024-04-23 02:53:24.888493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.198  Copying: 12/36 [MB] (average 1090 MBps) 00:08:46.198 00:08:46.198 02:53:25 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:46.198 02:53:25 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:46.198 02:53:25 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:46.198 02:53:25 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:46.198 02:53:25 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:46.198 02:53:25 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:46.198 02:53:25 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:46.198 02:53:25 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:46.198 02:53:25 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:46.198 02:53:25 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:46.198 00:08:46.198 real 0m0.490s 00:08:46.198 user 0m0.299s 00:08:46.198 sys 0m0.225s 00:08:46.198 02:53:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.198 ************************************ 00:08:46.198 02:53:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.198 END TEST dd_sparse_file_to_file 00:08:46.198 ************************************ 00:08:46.198 02:53:25 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:46.198 02:53:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.198 02:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.198 02:53:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.198 ************************************ 00:08:46.198 START TEST dd_sparse_file_to_bdev 00:08:46.198 ************************************ 00:08:46.198 02:53:25 -- common/autotest_common.sh@1111 -- # file_to_bdev 00:08:46.198 02:53:25 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:46.198 02:53:25 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:46.198 02:53:25 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:46.198 02:53:25 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:46.198 02:53:25 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:46.198 02:53:25 -- dd/sparse.sh@73 -- # gen_conf 00:08:46.198 02:53:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.198 02:53:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.198 [2024-04-23 02:53:25.313915] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
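The stat checks above are the sparseness assertion for file_to_file: %s reports the apparent size and %b the allocated 512-byte blocks, so both files show 37748736 B = 36 MiB apparent while 24576 x 512 B = 12 MiB is actually allocated, exactly the three 4 MiB data extents from prepare and consistent with the 'Copying: 12/36 [MB]' progress line, which counts only data, not holes. For reference:

# %s = apparent size in bytes, %b = allocated 512-byte blocks
stat --printf='%s %b\n' file_zero2   # expected: 37748736 24576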
00:08:46.198 [2024-04-23 02:53:25.314023] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77835 ] 00:08:46.198 { 00:08:46.198 "subsystems": [ 00:08:46.198 { 00:08:46.198 "subsystem": "bdev", 00:08:46.198 "config": [ 00:08:46.198 { 00:08:46.198 "params": { 00:08:46.198 "block_size": 4096, 00:08:46.198 "filename": "dd_sparse_aio_disk", 00:08:46.198 "name": "dd_aio" 00:08:46.198 }, 00:08:46.198 "method": "bdev_aio_create" 00:08:46.198 }, 00:08:46.198 { 00:08:46.198 "params": { 00:08:46.198 "lvs_name": "dd_lvstore", 00:08:46.198 "lvol_name": "dd_lvol", 00:08:46.198 "size": 37748736, 00:08:46.198 "thin_provision": true 00:08:46.198 }, 00:08:46.198 "method": "bdev_lvol_create" 00:08:46.198 }, 00:08:46.198 { 00:08:46.198 "method": "bdev_wait_for_examine" 00:08:46.198 } 00:08:46.198 ] 00:08:46.198 } 00:08:46.198 ] 00:08:46.198 } 00:08:46.458 [2024-04-23 02:53:25.435085] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:46.458 [2024-04-23 02:53:25.454506] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.458 [2024-04-23 02:53:25.485035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.458 [2024-04-23 02:53:25.547629] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:46.458  Copying: 12/36 [MB] (average 521 MBps)[2024-04-23 02:53:25.586144] app.c: 937:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:46.717 00:08:46.717 00:08:46.717 00:08:46.717 real 0m0.470s 00:08:46.717 user 0m0.287s 00:08:46.717 sys 0m0.232s 00:08:46.717 02:53:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.717 02:53:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.717 ************************************ 00:08:46.717 END TEST dd_sparse_file_to_bdev 00:08:46.717 ************************************ 00:08:46.717 02:53:25 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:46.717 02:53:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:46.717 02:53:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.717 02:53:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.717 ************************************ 00:08:46.717 START TEST dd_sparse_bdev_to_file 00:08:46.717 ************************************ 00:08:46.717 02:53:25 -- common/autotest_common.sh@1111 -- # bdev_to_file 00:08:46.717 02:53:25 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:46.717 02:53:25 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:46.717 02:53:25 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:46.717 02:53:25 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:46.717 02:53:25 -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:46.717 02:53:25 -- dd/sparse.sh@91 -- # gen_conf 00:08:46.717 02:53:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:46.717 02:53:25 -- common/autotest_common.sh@10 -- # set +x 00:08:46.976 [2024-04-23 
02:53:25.894730] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:46.976 [2024-04-23 02:53:25.894834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77866 ] 00:08:46.976 { 00:08:46.976 "subsystems": [ 00:08:46.976 { 00:08:46.976 "subsystem": "bdev", 00:08:46.976 "config": [ 00:08:46.976 { 00:08:46.976 "params": { 00:08:46.976 "block_size": 4096, 00:08:46.976 "filename": "dd_sparse_aio_disk", 00:08:46.976 "name": "dd_aio" 00:08:46.976 }, 00:08:46.976 "method": "bdev_aio_create" 00:08:46.976 }, 00:08:46.976 { 00:08:46.976 "method": "bdev_wait_for_examine" 00:08:46.976 } 00:08:46.976 ] 00:08:46.976 } 00:08:46.976 ] 00:08:46.976 } 00:08:46.976 [2024-04-23 02:53:26.016060] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:46.976 [2024-04-23 02:53:26.036709] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.976 [2024-04-23 02:53:26.076181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.235  Copying: 12/36 [MB] (average 1090 MBps) 00:08:47.235 00:08:47.235 02:53:26 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:47.235 02:53:26 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:47.235 02:53:26 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:47.235 02:53:26 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:47.235 02:53:26 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:47.235 02:53:26 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:47.235 02:53:26 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:47.235 02:53:26 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:47.235 02:53:26 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:47.235 02:53:26 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:47.235 00:08:47.235 real 0m0.510s 00:08:47.235 user 0m0.305s 00:08:47.235 sys 0m0.268s 00:08:47.235 02:53:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:47.235 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.235 ************************************ 00:08:47.235 END TEST dd_sparse_bdev_to_file 00:08:47.235 ************************************ 00:08:47.495 02:53:26 -- dd/sparse.sh@1 -- # cleanup 00:08:47.495 02:53:26 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:47.495 02:53:26 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:47.495 02:53:26 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:47.495 02:53:26 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:47.495 00:08:47.495 real 0m1.948s 00:08:47.495 user 0m1.052s 00:08:47.495 sys 0m0.996s 00:08:47.495 02:53:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:47.495 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 ************************************ 00:08:47.495 END TEST spdk_dd_sparse 00:08:47.495 ************************************ 00:08:47.495 02:53:26 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:47.495 02:53:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.495 02:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.495 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.495 ************************************ 00:08:47.495 START TEST spdk_dd_negative 00:08:47.495 
************************************ 00:08:47.495 02:53:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:47.495 * Looking for test storage... 00:08:47.495 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:47.495 02:53:26 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:47.495 02:53:26 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.495 02:53:26 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.495 02:53:26 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.495 02:53:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.495 02:53:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.495 02:53:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.495 02:53:26 -- paths/export.sh@5 -- # export PATH 00:08:47.495 02:53:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.495 02:53:26 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:47.495 02:53:26 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.495 02:53:26 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:47.495 02:53:26 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:47.495 02:53:26 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:47.495 02:53:26 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.495 02:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.495 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.754 ************************************ 00:08:47.754 START TEST dd_invalid_arguments 00:08:47.754 ************************************ 00:08:47.754 02:53:26 -- common/autotest_common.sh@1111 -- # invalid_arguments 00:08:47.755 02:53:26 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:47.755 02:53:26 -- common/autotest_common.sh@638 -- # local es=0 00:08:47.755 02:53:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:47.755 02:53:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.755 02:53:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:47.755 02:53:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.755 02:53:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:47.755 02:53:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.755 02:53:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:47.755 02:53:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.755 02:53:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:47.755 02:53:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:47.755 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:47.755 00:08:47.755 CPU options: 00:08:47.755 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:47.755 (like [0,1,10]) 00:08:47.755 --lcores lcore to CPU mapping list. The list is in the format: 00:08:47.755 [<,lcores[@CPUs]>...] 00:08:47.755 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:47.755 Within the group, '-' is used for range separator, 00:08:47.755 ',' is used for single number separator. 00:08:47.755 '( )' can be omitted for single element group, 00:08:47.755 '@' can be omitted if cpus and lcores have the same value 00:08:47.755 --disable-cpumask-locks Disable CPU core lock files. 00:08:47.755 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:47.755 pollers in the app support interrupt mode) 00:08:47.755 -p, --main-core main (primary) core for DPDK 00:08:47.755 00:08:47.755 Configuration options: 00:08:47.755 -c, --config, --json JSON config file 00:08:47.755 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:47.755 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:47.755 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:47.755 --rpcs-allowed comma-separated list of permitted RPCS 00:08:47.755 --json-ignore-init-errors don't exit on invalid config entry 00:08:47.755 00:08:47.755 Memory options: 00:08:47.755 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:47.755 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:47.755 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:47.755 -R, --huge-unlink unlink huge files after initialization 00:08:47.755 -n, --mem-channels number of memory channels used for DPDK 00:08:47.755 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:47.755 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:47.755 --no-huge run without using hugepages 00:08:47.755 -i, --shm-id shared memory ID (optional) 00:08:47.755 -g, --single-file-segments force creating just one hugetlbfs file 00:08:47.755 00:08:47.755 PCI options: 00:08:47.755 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:47.755 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:47.755 -u, --no-pci disable PCI access 00:08:47.755 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:47.755 00:08:47.755 Log options: 00:08:47.755 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:47.755 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:47.755 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:47.755 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:47.755 blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, 00:08:47.755 json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, 00:08:47.755 nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, sock_posix, 00:08:47.755 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:08:47.755 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, 00:08:47.755 virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, 00:08:47.755 virtio_vfio_user, vmd) 00:08:47.755 --silence-noticelog disable notice level logging to stderr 00:08:47.755 00:08:47.755 Trace options: 00:08:47.755 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:47.755 setting 0 to disable trace (default 32768) 00:08:47.755 Tracepoints vary in size and can use more than one trace entry. 00:08:47.755 -e, --tpoint-group [:] 00:08:47.755 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:47.755 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:47.755 [2024-04-23 02:53:26.767648] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:47.755 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, all). 00:08:47.755 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:47.755 a tracepoint group. First tpoint inside a group can be enabled by 00:08:47.755 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:47.755 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:47.755 in /include/spdk_internal/trace_defs.h 00:08:47.755 00:08:47.755 Other options: 00:08:47.755 -h, --help show this usage 00:08:47.755 -v, --version print SPDK version 00:08:47.755 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:47.755 --env-context Opaque context for use of the env implementation 00:08:47.755 00:08:47.755 Application specific: 00:08:47.755 [--------- DD Options ---------] 00:08:47.755 --if Input file. Must specify either --if or --ib. 00:08:47.755 --ib Input bdev. Must specifier either --if or --ib 00:08:47.755 --of Output file. Must specify either --of or --ob. 00:08:47.755 --ob Output bdev. Must specify either --of or --ob. 00:08:47.755 --iflag Input file flags. 00:08:47.755 --oflag Output file flags. 00:08:47.755 --bs I/O unit size (default: 4096) 00:08:47.755 --qd Queue depth (default: 2) 00:08:47.755 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:47.755 --skip Skip this many I/O units at start of input. (default: 0) 00:08:47.755 --seek Skip this many I/O units at start of output. (default: 0) 00:08:47.755 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:47.755 --sparse Enable hole skipping in input target 00:08:47.755 Available iflag and oflag values: 00:08:47.755 append - append mode 00:08:47.755 direct - use direct I/O for data 00:08:47.755 directory - fail unless a directory 00:08:47.755 dsync - use synchronized I/O for data 00:08:47.755 noatime - do not update access time 00:08:47.755 noctty - do not assign controlling terminal from file 00:08:47.755 nofollow - do not follow symlinks 00:08:47.755 nonblock - use non-blocking I/O 00:08:47.755 sync - use synchronized I/O for data and metadata 00:08:47.755 02:53:26 -- common/autotest_common.sh@641 -- # es=2 00:08:47.755 02:53:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:47.755 02:53:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:47.755 02:53:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:47.755 00:08:47.755 real 0m0.068s 00:08:47.755 user 0m0.046s 00:08:47.755 sys 0m0.021s 00:08:47.755 02:53:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:47.755 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.755 ************************************ 00:08:47.755 END TEST dd_invalid_arguments 00:08:47.755 ************************************ 00:08:47.755 02:53:26 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:47.755 02:53:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:47.755 02:53:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:47.755 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:47.755 ************************************ 00:08:47.755 START TEST dd_double_input 00:08:47.755 ************************************ 00:08:47.755 02:53:26 -- common/autotest_common.sh@1111 -- # double_input 00:08:47.755 02:53:26 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:47.755 02:53:26 -- common/autotest_common.sh@638 -- # local es=0 00:08:47.755 02:53:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:47.756 02:53:26 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.756 02:53:26 -- common/autotest_common.sh@630 
-- # case "$(type -t "$arg")" in 00:08:47.756 02:53:26 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:47.756 02:53:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:47.756 02:53:26 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.015 02:53:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.015 02:53:26 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.015 02:53:26 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.015 02:53:26 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:48.015 [2024-04-23 02:53:26.959381] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 00:08:48.015 02:53:26 -- common/autotest_common.sh@641 -- # es=22 00:08:48.015 02:53:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:48.015 02:53:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:48.015 02:53:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:48.015 00:08:48.015 real 0m0.070s 00:08:48.015 user 0m0.044s 00:08:48.015 sys 0m0.026s 00:08:48.015 02:53:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.015 ************************************ 00:08:48.015 END TEST dd_double_input 00:08:48.015 ************************************ 00:08:48.015 02:53:26 -- common/autotest_common.sh@10 -- # set +x 00:08:48.015 02:53:27 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:48.015 02:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.015 02:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.015 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.015 ************************************ 00:08:48.015 START TEST dd_double_output 00:08:48.015 ************************************ 00:08:48.015 02:53:27 -- common/autotest_common.sh@1111 -- # double_output 00:08:48.015 02:53:27 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.015 02:53:27 -- common/autotest_common.sh@638 -- # local es=0 00:08:48.015 02:53:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.015 02:53:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.015 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.015 02:53:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.015 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.015 02:53:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.015 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.015 02:53:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.015 02:53:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.015 02:53:27 -- common/autotest_common.sh@641 -- 
# /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:48.015 [2024-04-23 02:53:27.134764] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:48.015 02:53:27 -- common/autotest_common.sh@641 -- # es=22 00:08:48.015 02:53:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:48.015 02:53:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:48.015 02:53:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:48.015 00:08:48.015 real 0m0.063s 00:08:48.015 user 0m0.038s 00:08:48.015 sys 0m0.024s 00:08:48.015 02:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.015 ************************************ 00:08:48.015 END TEST dd_double_output 00:08:48.015 ************************************ 00:08:48.015 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.274 02:53:27 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:48.274 02:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.274 02:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.274 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.274 ************************************ 00:08:48.274 START TEST dd_no_input 00:08:48.274 ************************************ 00:08:48.274 02:53:27 -- common/autotest_common.sh@1111 -- # no_input 00:08:48.274 02:53:27 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.274 02:53:27 -- common/autotest_common.sh@638 -- # local es=0 00:08:48.274 02:53:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.274 02:53:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.274 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.274 02:53:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.274 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.274 02:53:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.274 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.274 02:53:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.274 02:53:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.274 02:53:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:48.274 [2024-04-23 02:53:27.317505] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib 00:08:48.274 02:53:27 -- common/autotest_common.sh@641 -- # es=22 00:08:48.274 02:53:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:48.274 02:53:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:48.274 02:53:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:48.274 00:08:48.274 real 0m0.068s 00:08:48.274 user 0m0.047s 00:08:48.274 sys 0m0.021s 00:08:48.274 02:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.274 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.274 ************************************ 00:08:48.274 END TEST dd_no_input 00:08:48.274 ************************************ 00:08:48.274 02:53:27 -- dd/negative_dd.sh@111 -- # run_test dd_no_output 
no_output 00:08:48.274 02:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.274 02:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.274 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.534 ************************************ 00:08:48.534 START TEST dd_no_output 00:08:48.534 ************************************ 00:08:48.534 02:53:27 -- common/autotest_common.sh@1111 -- # no_output 00:08:48.534 02:53:27 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.534 02:53:27 -- common/autotest_common.sh@638 -- # local es=0 00:08:48.534 02:53:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.534 02:53:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.534 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.534 02:53:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.534 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.534 02:53:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.534 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.535 02:53:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.535 02:53:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.535 02:53:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:48.535 [2024-04-23 02:53:27.502186] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:48.535 02:53:27 -- common/autotest_common.sh@641 -- # es=22 00:08:48.535 02:53:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:48.535 02:53:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:48.535 02:53:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:48.535 00:08:48.535 real 0m0.068s 00:08:48.535 user 0m0.041s 00:08:48.535 sys 0m0.025s 00:08:48.535 02:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.535 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.535 ************************************ 00:08:48.535 END TEST dd_no_output 00:08:48.535 ************************************ 00:08:48.535 02:53:27 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:48.535 02:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.535 02:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.535 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.535 ************************************ 00:08:48.535 START TEST dd_wrong_blocksize 00:08:48.535 ************************************ 00:08:48.535 02:53:27 -- common/autotest_common.sh@1111 -- # wrong_blocksize 00:08:48.535 02:53:27 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:48.535 02:53:27 -- common/autotest_common.sh@638 -- # local es=0 00:08:48.535 02:53:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:48.535 02:53:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.535 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.535 02:53:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.535 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.535 02:53:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.535 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.535 02:53:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.535 02:53:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.535 02:53:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:48.535 [2024-04-23 02:53:27.691406] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:48.796 02:53:27 -- common/autotest_common.sh@641 -- # es=22 00:08:48.796 02:53:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:48.796 02:53:27 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:48.796 02:53:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:48.796 00:08:48.796 real 0m0.072s 00:08:48.796 user 0m0.044s 00:08:48.796 sys 0m0.026s 00:08:48.796 02:53:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:48.796 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.796 ************************************ 00:08:48.796 END TEST dd_wrong_blocksize 00:08:48.796 ************************************ 00:08:48.796 02:53:27 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:48.796 02:53:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:48.796 02:53:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:48.796 02:53:27 -- common/autotest_common.sh@10 -- # set +x 00:08:48.796 ************************************ 00:08:48.796 START TEST dd_smaller_blocksize 00:08:48.796 ************************************ 00:08:48.796 02:53:27 -- common/autotest_common.sh@1111 -- # smaller_blocksize 00:08:48.796 02:53:27 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:48.796 02:53:27 -- common/autotest_common.sh@638 -- # local es=0 00:08:48.796 02:53:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:48.796 02:53:27 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.796 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.796 02:53:27 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.796 02:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.796 02:53:27 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.796 02:53:27 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:48.796 02:53:27 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:48.796 02:53:27 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:48.796 02:53:27 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:48.796 [2024-04-23 02:53:27.876492] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:48.796 [2024-04-23 02:53:27.876582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78126 ] 00:08:49.055 [2024-04-23 02:53:27.997236] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:49.055 [2024-04-23 02:53:28.017769] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.055 [2024-04-23 02:53:28.056858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.055 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:49.055 [2024-04-23 02:53:28.104768] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:49.055 [2024-04-23 02:53:28.104808] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:49.055 [2024-04-23 02:53:28.165759] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:49.314 02:53:28 -- common/autotest_common.sh@641 -- # es=244 00:08:49.314 02:53:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:49.314 02:53:28 -- common/autotest_common.sh@650 -- # es=116 00:08:49.314 02:53:28 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:49.314 02:53:28 -- common/autotest_common.sh@658 -- # es=1 00:08:49.314 02:53:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:49.314 00:08:49.314 real 0m0.413s 00:08:49.314 user 0m0.207s 00:08:49.314 sys 0m0.101s 00:08:49.314 02:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.314 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.314 ************************************ 00:08:49.314 END TEST dd_smaller_blocksize 00:08:49.314 ************************************ 00:08:49.314 02:53:28 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:49.314 02:53:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.314 02:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.314 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.314 ************************************ 00:08:49.314 START TEST dd_invalid_count 00:08:49.314 ************************************ 00:08:49.314 02:53:28 -- common/autotest_common.sh@1111 -- # invalid_count 00:08:49.314 02:53:28 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:49.314 02:53:28 -- common/autotest_common.sh@638 -- # local es=0 00:08:49.314 02:53:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:49.314 02:53:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.314 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.314 02:53:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.314 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.314 02:53:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.314 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.314 02:53:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.314 02:53:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.314 02:53:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:49.314 [2024-04-23 02:53:28.395465] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:49.314 02:53:28 -- common/autotest_common.sh@641 -- # es=22 00:08:49.314 02:53:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:49.314 02:53:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:49.314 02:53:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:49.314 00:08:49.314 real 0m0.066s 00:08:49.314 user 0m0.044s 00:08:49.314 sys 0m0.021s 00:08:49.314 02:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.314 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.314 ************************************ 00:08:49.314 END TEST dd_invalid_count 00:08:49.314 ************************************ 00:08:49.314 02:53:28 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:49.314 02:53:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.314 02:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.314 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.573 ************************************ 00:08:49.573 START TEST dd_invalid_oflag 00:08:49.573 ************************************ 00:08:49.573 02:53:28 -- common/autotest_common.sh@1111 -- # invalid_oflag 00:08:49.573 02:53:28 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:49.573 02:53:28 -- common/autotest_common.sh@638 -- # local es=0 00:08:49.573 02:53:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:49.573 02:53:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.573 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.573 02:53:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.573 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.573 02:53:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.573 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.573 02:53:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.573 02:53:28 -- common/autotest_common.sh@632 -- # [[ -x 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.573 02:53:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:49.573 [2024-04-23 02:53:28.594626] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:49.573 02:53:28 -- common/autotest_common.sh@641 -- # es=22 00:08:49.573 02:53:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:49.573 02:53:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:49.573 02:53:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:49.573 00:08:49.573 real 0m0.070s 00:08:49.573 user 0m0.046s 00:08:49.573 sys 0m0.023s 00:08:49.573 02:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.573 ************************************ 00:08:49.573 END TEST dd_invalid_oflag 00:08:49.573 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.573 ************************************ 00:08:49.573 02:53:28 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:49.573 02:53:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.573 02:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.573 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.832 ************************************ 00:08:49.832 START TEST dd_invalid_iflag 00:08:49.832 ************************************ 00:08:49.832 02:53:28 -- common/autotest_common.sh@1111 -- # invalid_iflag 00:08:49.832 02:53:28 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:49.832 02:53:28 -- common/autotest_common.sh@638 -- # local es=0 00:08:49.832 02:53:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:49.832 02:53:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.832 02:53:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.832 02:53:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.832 02:53:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:49.832 [2024-04-23 02:53:28.785526] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:49.832 02:53:28 -- common/autotest_common.sh@641 -- # es=22 00:08:49.832 02:53:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:49.832 02:53:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:08:49.832 02:53:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:49.832 00:08:49.832 real 0m0.071s 00:08:49.832 user 0m0.039s 00:08:49.832 sys 0m0.031s 00:08:49.832 02:53:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:49.832 ************************************ 00:08:49.832 END TEST dd_invalid_iflag 00:08:49.832 ************************************ 00:08:49.832 02:53:28 -- common/autotest_common.sh@10 -- # set +x 
00:08:49.832 02:53:28 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:49.832 02:53:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:49.832 02:53:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.832 02:53:28 -- common/autotest_common.sh@10 -- # set +x 00:08:49.832 ************************************ 00:08:49.832 START TEST dd_unknown_flag 00:08:49.832 ************************************ 00:08:49.832 02:53:28 -- common/autotest_common.sh@1111 -- # unknown_flag 00:08:49.832 02:53:28 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:49.832 02:53:28 -- common/autotest_common.sh@638 -- # local es=0 00:08:49.832 02:53:28 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:49.832 02:53:28 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.832 02:53:28 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:49.832 02:53:28 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:49.832 02:53:28 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:49.832 02:53:28 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:49.832 [2024-04-23 02:53:28.974145] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:49.833 [2024-04-23 02:53:28.974231] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78235 ] 00:08:50.092 [2024-04-23 02:53:29.095294] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:50.092 [2024-04-23 02:53:29.116798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.092 [2024-04-23 02:53:29.155396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.092 [2024-04-23 02:53:29.203045] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:50.092 [2024-04-23 02:53:29.203122] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.092 [2024-04-23 02:53:29.203248] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:50.092 [2024-04-23 02:53:29.203284] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.092 [2024-04-23 02:53:29.203623] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:50.092 [2024-04-23 02:53:29.203652] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.092 [2024-04-23 02:53:29.203727] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:50.092 [2024-04-23 02:53:29.203748] app.c: 953:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:50.351 [2024-04-23 02:53:29.269322] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:50.351 02:53:29 -- common/autotest_common.sh@641 -- # es=234 00:08:50.351 02:53:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:50.351 02:53:29 -- common/autotest_common.sh@650 -- # es=106 00:08:50.351 02:53:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:50.351 02:53:29 -- common/autotest_common.sh@658 -- # es=1 00:08:50.351 ************************************ 00:08:50.351 END TEST dd_unknown_flag 00:08:50.351 ************************************ 00:08:50.351 02:53:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:50.351 00:08:50.351 real 0m0.424s 00:08:50.351 user 0m0.229s 00:08:50.351 sys 0m0.099s 00:08:50.351 02:53:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.351 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.351 02:53:29 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:50.351 02:53:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:50.351 02:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.351 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.351 ************************************ 00:08:50.351 START TEST dd_invalid_json 00:08:50.351 ************************************ 00:08:50.351 02:53:29 -- common/autotest_common.sh@1111 -- # invalid_json 00:08:50.351 02:53:29 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:50.351 02:53:29 -- dd/negative_dd.sh@95 -- # : 00:08:50.351 02:53:29 -- common/autotest_common.sh@638 -- # local es=0 00:08:50.351 02:53:29 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:50.351 02:53:29 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.351 02:53:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:50.351 02:53:29 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.351 02:53:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:50.351 02:53:29 -- common/autotest_common.sh@632 -- 
# type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.351 02:53:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:08:50.351 02:53:29 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:50.351 02:53:29 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:50.351 02:53:29 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:50.610 [2024-04-23 02:53:29.519322] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:50.610 [2024-04-23 02:53:29.519416] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78273 ] 00:08:50.610 [2024-04-23 02:53:29.640852] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:50.610 [2024-04-23 02:53:29.661449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.610 [2024-04-23 02:53:29.699796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.610 [2024-04-23 02:53:29.699885] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:50.610 [2024-04-23 02:53:29.699913] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:50.610 [2024-04-23 02:53:29.699930] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:50.610 [2024-04-23 02:53:29.699989] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:50.610 02:53:29 -- common/autotest_common.sh@641 -- # es=234 00:08:50.610 02:53:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:08:50.610 02:53:29 -- common/autotest_common.sh@650 -- # es=106 00:08:50.869 ************************************ 00:08:50.869 END TEST dd_invalid_json 00:08:50.869 ************************************ 00:08:50.869 02:53:29 -- common/autotest_common.sh@651 -- # case "$es" in 00:08:50.869 02:53:29 -- common/autotest_common.sh@658 -- # es=1 00:08:50.869 02:53:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:08:50.869 00:08:50.869 real 0m0.306s 00:08:50.869 user 0m0.141s 00:08:50.869 sys 0m0.063s 00:08:50.869 02:53:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.869 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 ************************************ 00:08:50.869 END TEST spdk_dd_negative 00:08:50.869 ************************************ 00:08:50.869 00:08:50.869 real 0m3.267s 00:08:50.869 user 0m1.502s 00:08:50.869 sys 0m1.238s 00:08:50.869 02:53:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.869 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 ************************************ 00:08:50.869 END TEST spdk_dd 00:08:50.869 ************************************ 00:08:50.869 00:08:50.869 real 1m1.285s 00:08:50.869 user 0m37.963s 00:08:50.869 sys 0m26.437s 00:08:50.869 02:53:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:50.869 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 02:53:29 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:50.869 02:53:29 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:08:50.869 02:53:29 -- 
spdk/autotest.sh@258 -- # timing_exit lib 00:08:50.869 02:53:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:50.869 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 02:53:29 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:08:50.869 02:53:29 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:08:50.869 02:53:29 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:08:50.869 02:53:29 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:08:50.869 02:53:29 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:08:50.869 02:53:29 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:08:50.869 02:53:29 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:50.869 02:53:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:50.869 02:53:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:50.869 02:53:29 -- common/autotest_common.sh@10 -- # set +x 00:08:50.869 ************************************ 00:08:50.869 START TEST nvmf_tcp 00:08:50.869 ************************************ 00:08:50.869 02:53:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:51.128 * Looking for test storage... 00:08:51.128 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.128 02:53:30 -- nvmf/common.sh@7 -- # uname -s 00:08:51.128 02:53:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.128 02:53:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.128 02:53:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.128 02:53:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.128 02:53:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.128 02:53:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.128 02:53:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.128 02:53:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.128 02:53:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.128 02:53:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.128 02:53:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:08:51.128 02:53:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:08:51.128 02:53:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.128 02:53:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.128 02:53:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.128 02:53:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.128 02:53:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.128 02:53:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.128 02:53:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.128 02:53:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.128 02:53:30 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.128 02:53:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.128 02:53:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.128 02:53:30 -- paths/export.sh@5 -- # export PATH 00:08:51.128 02:53:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.128 02:53:30 -- nvmf/common.sh@47 -- # : 0 00:08:51.128 02:53:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.128 02:53:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.128 02:53:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.128 02:53:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.128 02:53:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.128 02:53:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.128 02:53:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.128 02:53:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:51.128 02:53:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.128 02:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:51.128 02:53:30 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.128 02:53:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:51.128 02:53:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.128 02:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.128 ************************************ 00:08:51.128 START TEST nvmf_host_management 00:08:51.128 ************************************ 00:08:51.128 02:53:30 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:51.388 * Looking for test storage... 
00:08:51.388 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.388 02:53:30 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.388 02:53:30 -- nvmf/common.sh@7 -- # uname -s 00:08:51.388 02:53:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.388 02:53:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.388 02:53:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.388 02:53:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.388 02:53:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.389 02:53:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.389 02:53:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.389 02:53:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.389 02:53:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.389 02:53:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.389 02:53:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:08:51.389 02:53:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:08:51.389 02:53:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.389 02:53:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.389 02:53:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.389 02:53:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.389 02:53:30 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.389 02:53:30 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.389 02:53:30 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.389 02:53:30 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.389 02:53:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.389 02:53:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.389 02:53:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.389 02:53:30 -- paths/export.sh@5 -- # export PATH 00:08:51.389 02:53:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.389 02:53:30 -- nvmf/common.sh@47 -- # : 0 00:08:51.389 02:53:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:51.389 02:53:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:51.389 02:53:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.389 02:53:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.389 02:53:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.389 02:53:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:51.389 02:53:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:51.389 02:53:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:51.389 02:53:30 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.389 02:53:30 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.389 02:53:30 -- target/host_management.sh@105 -- # nvmftestinit 00:08:51.389 02:53:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:51.389 02:53:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.389 02:53:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:51.389 02:53:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:51.389 02:53:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:51.389 02:53:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.389 02:53:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:51.389 02:53:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.389 02:53:30 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:51.389 02:53:30 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:51.389 02:53:30 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:51.389 02:53:30 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:51.389 02:53:30 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:51.389 02:53:30 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:51.389 02:53:30 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.389 02:53:30 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.389 02:53:30 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:51.389 02:53:30 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:51.389 02:53:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.389 02:53:30 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.389 02:53:30 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.389 02:53:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.389 02:53:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.389 02:53:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.389 02:53:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.389 02:53:30 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.389 02:53:30 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:51.389 Cannot find device "nvmf_init_br" 00:08:51.389 02:53:30 -- nvmf/common.sh@154 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:51.389 Cannot find device "nvmf_tgt_br" 00:08:51.389 02:53:30 -- nvmf/common.sh@155 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.389 Cannot find device "nvmf_tgt_br2" 00:08:51.389 02:53:30 -- nvmf/common.sh@156 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:51.389 Cannot find device "nvmf_init_br" 00:08:51.389 02:53:30 -- nvmf/common.sh@157 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:51.389 Cannot find device "nvmf_tgt_br" 00:08:51.389 02:53:30 -- nvmf/common.sh@158 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:51.389 Cannot find device "nvmf_tgt_br2" 00:08:51.389 02:53:30 -- nvmf/common.sh@159 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:51.389 Cannot find device "nvmf_br" 00:08:51.389 02:53:30 -- nvmf/common.sh@160 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:51.389 Cannot find device "nvmf_init_if" 00:08:51.389 02:53:30 -- nvmf/common.sh@161 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.389 02:53:30 -- nvmf/common.sh@162 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.389 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.389 02:53:30 -- nvmf/common.sh@163 -- # true 00:08:51.389 02:53:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.389 02:53:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:51.389 02:53:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.389 02:53:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.389 02:53:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.389 02:53:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.389 02:53:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.389 02:53:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:51.389 02:53:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:51.389 02:53:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:51.389 02:53:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:51.389 02:53:30 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:51.389 02:53:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:51.389 02:53:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.389 02:53:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.649 02:53:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.649 02:53:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:51.649 02:53:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:51.649 02:53:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.649 02:53:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.649 02:53:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.649 02:53:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.649 02:53:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.649 02:53:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:51.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:08:51.649 00:08:51.649 --- 10.0.0.2 ping statistics --- 00:08:51.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.649 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:08:51.649 02:53:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:51.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:08:51.649 00:08:51.649 --- 10.0.0.3 ping statistics --- 00:08:51.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.649 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:51.649 02:53:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:08:51.649 00:08:51.649 --- 10.0.0.1 ping statistics --- 00:08:51.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.649 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:08:51.649 02:53:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.649 02:53:30 -- nvmf/common.sh@422 -- # return 0 00:08:51.649 02:53:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:51.649 02:53:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.649 02:53:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:51.649 02:53:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:51.649 02:53:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.649 02:53:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:51.649 02:53:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:51.649 02:53:30 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:08:51.649 02:53:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:51.649 02:53:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:51.649 02:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.910 ************************************ 00:08:51.910 START TEST nvmf_host_management 00:08:51.910 ************************************ 00:08:51.910 02:53:30 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:08:51.910 02:53:30 -- target/host_management.sh@69 -- # starttarget 00:08:51.910 02:53:30 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:51.910 02:53:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:51.910 02:53:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:51.910 02:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.910 02:53:30 -- nvmf/common.sh@470 -- # nvmfpid=78544 00:08:51.910 02:53:30 -- nvmf/common.sh@471 -- # waitforlisten 78544 00:08:51.910 02:53:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:51.910 02:53:30 -- common/autotest_common.sh@817 -- # '[' -z 78544 ']' 00:08:51.910 02:53:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.910 02:53:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:51.910 02:53:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.910 02:53:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:51.910 02:53:30 -- common/autotest_common.sh@10 -- # set +x 00:08:51.910 [2024-04-23 02:53:30.876684] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:51.910 [2024-04-23 02:53:30.876781] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.910 [2024-04-23 02:53:31.001677] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:08:51.910 [2024-04-23 02:53:31.019781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.910 [2024-04-23 02:53:31.054544] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.910 [2024-04-23 02:53:31.054736] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.910 [2024-04-23 02:53:31.054802] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.910 [2024-04-23 02:53:31.054851] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.910 [2024-04-23 02:53:31.055125] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.910 [2024-04-23 02:53:31.055786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.910 [2024-04-23 02:53:31.055957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.910 [2024-04-23 02:53:31.056205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:51.910 [2024-04-23 02:53:31.056212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.170 02:53:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.170 02:53:31 -- common/autotest_common.sh@850 -- # return 0 00:08:52.170 02:53:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:52.170 02:53:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:52.170 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.170 02:53:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.170 02:53:31 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.170 02:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.170 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.170 [2024-04-23 02:53:31.178366] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.170 02:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.170 02:53:31 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:52.170 02:53:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:52.170 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.170 02:53:31 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:52.170 02:53:31 -- target/host_management.sh@23 -- # cat 00:08:52.170 02:53:31 -- target/host_management.sh@30 -- # rpc_cmd 00:08:52.170 02:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.170 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.170 Malloc0 00:08:52.170 [2024-04-23 02:53:31.248652] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.170 02:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.170 02:53:31 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:52.170 02:53:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:52.170 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:08:52.170 02:53:31 -- target/host_management.sh@73 -- # perfpid=78591 00:08:52.170 02:53:31 -- target/host_management.sh@74 -- # waitforlisten 78591 /var/tmp/bdevperf.sock 00:08:52.170 02:53:31 -- common/autotest_common.sh@817 -- # '[' -z 78591 ']' 00:08:52.170 02:53:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:52.171 02:53:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:52.171 02:53:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:52.171 02:53:31 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:52.171 02:53:31 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:52.171 02:53:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:52.171 02:53:31 -- nvmf/common.sh@521 -- # config=() 00:08:52.171 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.171 02:53:31 -- nvmf/common.sh@521 -- # local subsystem config 00:08:52.171 02:53:31 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:08:52.171 02:53:31 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:08:52.171 { 00:08:52.171 "params": { 00:08:52.171 "name": "Nvme$subsystem", 00:08:52.171 "trtype": "$TEST_TRANSPORT", 00:08:52.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.171 "adrfam": "ipv4", 00:08:52.171 "trsvcid": "$NVMF_PORT", 00:08:52.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.171 "hdgst": ${hdgst:-false}, 00:08:52.171 "ddgst": ${ddgst:-false} 00:08:52.171 }, 00:08:52.171 "method": "bdev_nvme_attach_controller" 00:08:52.171 } 00:08:52.171 EOF 00:08:52.171 )") 00:08:52.171 02:53:31 -- nvmf/common.sh@543 -- # cat 00:08:52.171 02:53:31 -- nvmf/common.sh@545 -- # jq . 00:08:52.171 02:53:31 -- nvmf/common.sh@546 -- # IFS=, 00:08:52.171 02:53:31 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:08:52.171 "params": { 00:08:52.171 "name": "Nvme0", 00:08:52.171 "trtype": "tcp", 00:08:52.171 "traddr": "10.0.0.2", 00:08:52.171 "adrfam": "ipv4", 00:08:52.171 "trsvcid": "4420", 00:08:52.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:52.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:52.171 "hdgst": false, 00:08:52.171 "ddgst": false 00:08:52.171 }, 00:08:52.171 "method": "bdev_nvme_attach_controller" 00:08:52.171 }' 00:08:52.430 [2024-04-23 02:53:31.346332] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:52.430 [2024-04-23 02:53:31.346432] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78591 ] 00:08:52.430 [2024-04-23 02:53:31.468368] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:52.430 [2024-04-23 02:53:31.487591] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.430 [2024-04-23 02:53:31.526545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.688 Running I/O for 10 seconds... 
00:08:52.688 02:53:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:52.688 02:53:31 -- common/autotest_common.sh@850 -- # return 0 00:08:52.688 02:53:31 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:52.688 02:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.688 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.688 02:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.688 02:53:31 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:52.688 02:53:31 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:52.689 02:53:31 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:52.689 02:53:31 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:52.689 02:53:31 -- target/host_management.sh@52 -- # local ret=1 00:08:52.689 02:53:31 -- target/host_management.sh@53 -- # local i 00:08:52.689 02:53:31 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:52.689 02:53:31 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:52.689 02:53:31 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:52.689 02:53:31 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:52.689 02:53:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.689 02:53:31 -- common/autotest_common.sh@10 -- # set +x 00:08:52.689 02:53:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:52.689 02:53:31 -- target/host_management.sh@55 -- # read_io_count=67 00:08:52.689 02:53:31 -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:52.689 02:53:31 -- target/host_management.sh@62 -- # sleep 0.25 00:08:52.946 02:53:32 -- target/host_management.sh@54 -- # (( i-- )) 00:08:52.946 02:53:32 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:52.946 02:53:32 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:52.946 02:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:52.946 02:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:52.946 02:53:32 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:52.946 02:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:53.206 02:53:32 -- target/host_management.sh@55 -- # read_io_count=515 00:08:53.207 02:53:32 -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:08:53.207 02:53:32 -- target/host_management.sh@59 -- # ret=0 00:08:53.207 02:53:32 -- target/host_management.sh@60 -- # break 00:08:53.207 02:53:32 -- target/host_management.sh@64 -- # return 0 00:08:53.207 02:53:32 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.207 02:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:53.207 02:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:53.207 02:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:53.207 02:53:32 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:53.207 02:53:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:08:53.207 02:53:32 -- common/autotest_common.sh@10 -- # set +x 00:08:53.207 02:53:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:08:53.207 02:53:32 -- target/host_management.sh@87 -- # sleep 1 
00:08:53.207 [2024-04-23 02:53:32.132632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:53.207 [2024-04-23 02:53:32.132684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:53.207 [editor: the same WRITE / ABORTED - SQ DELETION (00/08) pair repeats for cid:1 through cid:62, lba stepping by 128 from 82048 to 89856 (timestamps 02:53:32.132708 through 02:53:32.134102); condensed here]
00:08:53.208 [2024-04-23 02:53:32.134113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:08:53.208 [2024-04-23 02:53:32.134122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:53.208 [2024-04-23 02:53:32.134132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa081c0 is same with the state(5) to be set
00:08:53.208 [2024-04-23 02:53:32.134551] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa081c0 was disconnected and freed. reset controller.
00:08:53.208 [2024-04-23 02:53:32.134876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:08:53.208 [2024-04-23 02:53:32.135018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:53.208 [2024-04-23 02:53:32.135142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:08:53.208 [2024-04-23 02:53:32.135214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:53.208 [2024-04-23 02:53:32.135362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:08:53.208 [2024-04-23 02:53:32.135439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:53.208 [2024-04-23 02:53:32.135539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:08:53.208 [2024-04-23 02:53:32.135596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:08:53.208 [2024-04-23 02:53:32.135647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x56f3a0 is same with the state(5) to be set
00:08:53.208 [2024-04-23 02:53:32.136914] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
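Editor's note: every queued WRITE above completes with status (00/08) once the submission queue is deleted for the controller reset; that pair is NVMe status code type 0x0 (generic) and status code 0x08, Command Aborted due to SQ Deletion. A quick shell decode of such an sct/sc pair, assuming only the standard completion layout in which the 15-bit status field packs SC in bits 0-7 and SCT in bits 8-10:

sf=0x0008   # status field with the phase bit stripped; printed in the log as (00/08)
printf 'sct=0x%02x sc=0x%02x\n' $(( (sf >> 8) & 0x7 )) $(( sf & 0xff ))
# -> sct=0x00 sc=0x08: generic status, Command Aborted due to SQ Deletion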
00:08:53.208 task offset: 81920 on job bdev=Nvme0n1 fails
00:08:53.208
00:08:53.208 Latency(us)
00:08:53.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:53.208 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:53.208 Job: Nvme0n1 ended in about 0.47 seconds with error
00:08:53.208 Verification LBA range: start 0x0 length 0x400
00:08:53.208 Nvme0n1 : 0.47 1355.47 84.72 135.55 0.00 41308.97 3604.48 45041.11
00:08:53.208 ===================================================================================================================
00:08:53.208 Total : 1355.47 84.72 135.55 0.00 41308.97 3604.48 45041.11
00:08:53.208 [2024-04-23 02:53:32.139196] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:53.208 [2024-04-23 02:53:32.139323] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56f3a0 (9): Bad file descriptor
00:08:53.208 [2024-04-23 02:53:32.151047] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:08:54.162 02:53:33 -- target/host_management.sh@91 -- # kill -9 78591
00:08:54.163 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (78591) - No such process
00:08:54.163 02:53:33 -- target/host_management.sh@91 -- # true
00:08:54.163 02:53:33 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:08:54.163 02:53:33 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:08:54.163 02:53:33 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:08:54.163 02:53:33 -- nvmf/common.sh@521 -- # config=()
00:08:54.163 02:53:33 -- nvmf/common.sh@521 -- # local subsystem config
00:08:54.163 02:53:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:08:54.163 02:53:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:08:54.163 {
00:08:54.163 "params": {
00:08:54.163 "name": "Nvme$subsystem",
00:08:54.163 "trtype": "$TEST_TRANSPORT",
00:08:54.163 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:54.163 "adrfam": "ipv4",
00:08:54.163 "trsvcid": "$NVMF_PORT",
00:08:54.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:54.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:54.163 "hdgst": ${hdgst:-false},
00:08:54.163 "ddgst": ${ddgst:-false}
00:08:54.163 },
00:08:54.163 "method": "bdev_nvme_attach_controller"
00:08:54.163 }
00:08:54.163 EOF
00:08:54.163 )")
00:08:54.163 02:53:33 -- nvmf/common.sh@543 -- # cat
00:08:54.163 02:53:33 -- nvmf/common.sh@545 -- # jq .
00:08:54.163 02:53:33 -- nvmf/common.sh@546 -- # IFS=,
00:08:54.163 02:53:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:08:54.163 "params": {
00:08:54.163 "name": "Nvme0",
00:08:54.163 "trtype": "tcp",
00:08:54.163 "traddr": "10.0.0.2",
00:08:54.163 "adrfam": "ipv4",
00:08:54.163 "trsvcid": "4420",
00:08:54.163 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:08:54.163 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:08:54.163 "hdgst": false,
00:08:54.163 "ddgst": false
00:08:54.163 },
00:08:54.163 "method": "bdev_nvme_attach_controller"
00:08:54.163 }'
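Editor's note: the relaunch above hands bdevperf its target configuration on /dev/fd/62 through process substitution. A minimal standalone sketch of the same invocation, reusing the parameters gen_nvmf_target_json printed in this run; the "subsystems"/"bdev" envelope is an assumption about how that fragment gets wrapped, reconstructed here rather than copied from nvmf/common.sh:

./build/examples/bdevperf -q 64 -o 65536 -w verify -t 1 --json <(cat <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
)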
[2024-04-23 02:53:33.187748] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:08:54.163 [2024-04-23 02:53:33.188598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78625 ]
00:08:54.421 [2024-04-23 02:53:33.311343] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:08:54.421 [2024-04-23 02:53:33.331154] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.421 [2024-04-23 02:53:33.371987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.421 Running I/O for 1 seconds...
00:08:55.799
00:08:55.799 Latency(us)
00:08:55.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:55.799 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:08:55.799 Verification LBA range: start 0x0 length 0x400
00:08:55.799 Nvme0n1 : 1.04 1477.80 92.36 0.00 0.00 42355.54 4110.89 44087.85
00:08:55.799 ===================================================================================================================
00:08:55.799 Total : 1477.80 92.36 0.00 0.00 42355.54 4110.89 44087.85
00:08:55.799 02:53:34 -- target/host_management.sh@102 -- # stoptarget
00:08:55.799 02:53:34 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:08:55.799 02:53:34 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf
00:08:55.799 02:53:34 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt
00:08:55.799 02:53:34 -- target/host_management.sh@40 -- # nvmftestfini
00:08:55.799 02:53:34 -- nvmf/common.sh@477 -- # nvmfcleanup
00:08:55.799 02:53:34 -- nvmf/common.sh@117 -- # sync
00:08:55.799 02:53:34 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:08:55.799 02:53:34 -- nvmf/common.sh@120 -- # set +e
00:08:55.799 02:53:34 -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:55.799 02:53:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:08:55.799 rmmod nvme_tcp
00:08:55.799 rmmod nvme_fabrics
00:08:55.799 rmmod nvme_keyring
00:08:55.799 02:53:34 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:55.799 02:53:34 -- nvmf/common.sh@124 -- # set -e
00:08:55.799 02:53:34 -- nvmf/common.sh@125 -- # return 0
00:08:55.799 02:53:34 -- nvmf/common.sh@478 -- # '[' -n 78544 ']'
00:08:55.799 02:53:34 -- nvmf/common.sh@479 -- # killprocess 78544
00:08:55.799 02:53:34 -- common/autotest_common.sh@936 -- # '[' -z 78544 ']'
00:08:55.799 02:53:34 -- common/autotest_common.sh@940 -- # kill -0 78544
00:08:55.799 02:53:34 -- common/autotest_common.sh@941 -- # uname
00:08:55.799 02:53:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:08:55.799 02:53:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78544
00:08:55.799 killing process with pid 78544
00:08:55.799 02:53:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:08:55.799 02:53:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:08:55.799 02:53:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78544'
00:08:55.799 02:53:34 -- common/autotest_common.sh@955 -- # kill 78544
00:08:55.799 02:53:34 -- common/autotest_common.sh@960 -- # wait 78544
00:08:56.058 [2024-04-23 02:53:35.016749] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
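Editor's note: the nvmfcleanup trace above boils down to a retry loop that pulls nvme-tcp (and, as the rmmod lines show, its nvme_fabrics/nvme_keyring dependents) before removing nvme-fabrics itself. A condensed sketch of that pattern; the 20-attempt budget matches the {1..20} loop in the trace, while the sleep between attempts is an assumption, since the traced loop body is not shown in full:

set +e
for i in {1..20}; do
    modprobe -v -r nvme-tcp && break   # succeeds once no initiator holds the module
    sleep 1                            # assumed back-off between attempts
done
modprobe -v -r nvme-fabrics
set -e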
00:08:56.058 02:53:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:08:56.058 02:53:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:08:56.058 02:53:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:08:56.058 02:53:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:08:56.058 02:53:35 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:08:56.058 02:53:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:56.058 02:53:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:08:56.058 02:53:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.058 02:53:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:08:56.058
00:08:56.058 real 0m4.269s
00:08:56.058 user 0m17.852s
00:08:56.058 sys 0m1.052s
00:08:56.058 02:53:35 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:08:56.058 02:53:35 -- common/autotest_common.sh@10 -- # set +x
00:08:56.058 ************************************
00:08:56.058 END TEST nvmf_host_management
00:08:56.058 ************************************
00:08:56.058 02:53:35 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:08:56.058 ************************************
00:08:56.058 END TEST nvmf_host_management
00:08:56.058 ************************************
00:08:56.058
00:08:56.058 real 0m4.911s
00:08:56.058 user 0m18.007s
00:08:56.058 sys 0m1.311s
00:08:56.058 02:53:35 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:08:56.058 02:53:35 -- common/autotest_common.sh@10 -- # set +x
00:08:56.058 02:53:35 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:08:56.058 02:53:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:08:56.058 02:53:35 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:08:56.058 02:53:35 -- common/autotest_common.sh@10 -- # set +x
00:08:56.318 ************************************
00:08:56.318 START TEST nvmf_lvol
00:08:56.318 ************************************
00:08:56.318 02:53:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:08:56.318 * Looking for test storage...
00:08:56.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.318 02:53:35 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.318 02:53:35 -- nvmf/common.sh@7 -- # uname -s 00:08:56.318 02:53:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.318 02:53:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.318 02:53:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.318 02:53:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.318 02:53:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.318 02:53:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.318 02:53:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.318 02:53:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.318 02:53:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.318 02:53:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.318 02:53:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:08:56.318 02:53:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:08:56.318 02:53:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.318 02:53:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.318 02:53:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.318 02:53:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.318 02:53:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.318 02:53:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.318 02:53:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.318 02:53:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.318 02:53:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.318 02:53:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.319 02:53:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.319 02:53:35 -- paths/export.sh@5 -- # export PATH 00:08:56.319 02:53:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.319 02:53:35 -- nvmf/common.sh@47 -- # : 0 00:08:56.319 02:53:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:56.319 02:53:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:56.319 02:53:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.319 02:53:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.319 02:53:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.319 02:53:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:56.319 02:53:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:56.319 02:53:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:56.319 02:53:35 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.319 02:53:35 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.319 02:53:35 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:56.319 02:53:35 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:56.319 02:53:35 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.319 02:53:35 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:56.319 02:53:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:08:56.319 02:53:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.319 02:53:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:08:56.319 02:53:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:08:56.319 02:53:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:08:56.319 02:53:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.319 02:53:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:56.319 02:53:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.319 02:53:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:08:56.319 02:53:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:08:56.319 02:53:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:08:56.319 02:53:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:08:56.319 02:53:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:08:56.319 02:53:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:08:56.319 02:53:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.319 02:53:35 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.319 02:53:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:56.319 02:53:35 -- 
nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:08:56.319 02:53:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.319 02:53:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.319 02:53:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.319 02:53:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.319 02:53:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.319 02:53:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.319 02:53:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.319 02:53:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.319 02:53:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:08:56.319 02:53:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:08:56.319 Cannot find device "nvmf_tgt_br" 00:08:56.319 02:53:35 -- nvmf/common.sh@155 -- # true 00:08:56.319 02:53:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.319 Cannot find device "nvmf_tgt_br2" 00:08:56.319 02:53:35 -- nvmf/common.sh@156 -- # true 00:08:56.319 02:53:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:08:56.319 02:53:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:08:56.319 Cannot find device "nvmf_tgt_br" 00:08:56.319 02:53:35 -- nvmf/common.sh@158 -- # true 00:08:56.319 02:53:35 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:08:56.319 Cannot find device "nvmf_tgt_br2" 00:08:56.319 02:53:35 -- nvmf/common.sh@159 -- # true 00:08:56.319 02:53:35 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:08:56.319 02:53:35 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:08:56.578 02:53:35 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.578 02:53:35 -- nvmf/common.sh@162 -- # true 00:08:56.578 02:53:35 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.578 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.578 02:53:35 -- nvmf/common.sh@163 -- # true 00:08:56.578 02:53:35 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.578 02:53:35 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.578 02:53:35 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.578 02:53:35 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.578 02:53:35 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.578 02:53:35 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.579 02:53:35 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.579 02:53:35 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:56.579 02:53:35 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:56.579 02:53:35 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:08:56.579 02:53:35 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:08:56.579 02:53:35 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:08:56.579 02:53:35 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:08:56.579 02:53:35 -- 
nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.579 02:53:35 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.579 02:53:35 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.579 02:53:35 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:08:56.579 02:53:35 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:08:56.579 02:53:35 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.579 02:53:35 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.579 02:53:35 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:56.579 02:53:35 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:56.579 02:53:35 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:56.579 02:53:35 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:08:56.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:56.579 00:08:56.579 --- 10.0.0.2 ping statistics --- 00:08:56.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.579 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:56.579 02:53:35 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:08:56.579 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:56.579 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:08:56.579 00:08:56.579 --- 10.0.0.3 ping statistics --- 00:08:56.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.579 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:08:56.579 02:53:35 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:56.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:08:56.579 00:08:56.579 --- 10.0.0.1 ping statistics --- 00:08:56.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.579 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:08:56.579 02:53:35 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.579 02:53:35 -- nvmf/common.sh@422 -- # return 0 00:08:56.579 02:53:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:08:56.579 02:53:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.579 02:53:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:08:56.579 02:53:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:08:56.579 02:53:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.579 02:53:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:08:56.579 02:53:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:08:56.579 02:53:35 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:56.579 02:53:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:08:56.579 02:53:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:08:56.579 02:53:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.579 02:53:35 -- nvmf/common.sh@470 -- # nvmfpid=78863 00:08:56.579 02:53:35 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:56.579 02:53:35 -- nvmf/common.sh@471 -- # waitforlisten 78863 00:08:56.579 02:53:35 -- common/autotest_common.sh@817 -- # '[' -z 78863 ']' 00:08:56.579 02:53:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.579 02:53:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:08:56.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.579 02:53:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.579 02:53:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:08:56.579 02:53:35 -- common/autotest_common.sh@10 -- # set +x 00:08:56.838 [2024-04-23 02:53:35.767023] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:08:56.838 [2024-04-23 02:53:35.767141] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.838 [2024-04-23 02:53:35.891162] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:08:56.838 [2024-04-23 02:53:35.909287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:56.838 [2024-04-23 02:53:35.950913] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.838 [2024-04-23 02:53:35.950987] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.838 [2024-04-23 02:53:35.951012] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.838 [2024-04-23 02:53:35.951023] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.838 [2024-04-23 02:53:35.951041] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
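Editor's note: waitforlisten above simply watches the freshly launched target, pid 78863, until its RPC socket answers. A rough standalone equivalent, assuming scripts/rpc.py is available and the default /var/tmp/spdk.sock socket; the 100-try budget mirrors max_retries in the trace, and the poll interval is an assumption:

pid=78863
for _ in {1..100}; do
    kill -0 "$pid" 2>/dev/null || { echo "target died during startup" >&2; exit 1; }
    scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1   # assumed poll interval
done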
00:08:56.838 [2024-04-23 02:53:35.951230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.838 [2024-04-23 02:53:35.952018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.838 [2024-04-23 02:53:35.952057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.097 02:53:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:08:57.098 02:53:36 -- common/autotest_common.sh@850 -- # return 0 00:08:57.098 02:53:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:08:57.098 02:53:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:08:57.098 02:53:36 -- common/autotest_common.sh@10 -- # set +x 00:08:57.098 02:53:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.098 02:53:36 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:57.357 [2024-04-23 02:53:36.304110] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.357 02:53:36 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.616 02:53:36 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:57.616 02:53:36 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:57.875 02:53:36 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:57.875 02:53:36 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:58.134 02:53:37 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:58.393 02:53:37 -- target/nvmf_lvol.sh@29 -- # lvs=748a2593-9dd4-464b-b078-57f972fa7bb2 00:08:58.393 02:53:37 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 748a2593-9dd4-464b-b078-57f972fa7bb2 lvol 20 00:08:58.652 02:53:37 -- target/nvmf_lvol.sh@32 -- # lvol=0b2d85c1-6a50-44fb-bb80-5d03398925fe 00:08:58.652 02:53:37 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:58.911 02:53:37 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0b2d85c1-6a50-44fb-bb80-5d03398925fe 00:08:59.169 02:53:38 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:59.169 [2024-04-23 02:53:38.326280] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.428 02:53:38 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.428 02:53:38 -- target/nvmf_lvol.sh@42 -- # perf_pid=78927 00:08:59.428 02:53:38 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:59.428 02:53:38 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:00.803 02:53:39 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 0b2d85c1-6a50-44fb-bb80-5d03398925fe MY_SNAPSHOT 00:09:00.803 02:53:39 -- target/nvmf_lvol.sh@47 -- # snapshot=a514e01d-bf92-48c9-a5ac-69408bae099c 00:09:00.803 02:53:39 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 0b2d85c1-6a50-44fb-bb80-5d03398925fe 30
00:09:01.062 02:53:40 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone a514e01d-bf92-48c9-a5ac-69408bae099c MY_CLONE
00:09:01.333 02:53:40 -- target/nvmf_lvol.sh@49 -- # clone=2c802e81-2355-4b2b-973a-2ddf1eca9bbe
00:09:01.333 02:53:40 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 2c802e81-2355-4b2b-973a-2ddf1eca9bbe
00:09:01.913 02:53:40 -- target/nvmf_lvol.sh@53 -- # wait 78927
00:09:10.023 Initializing NVMe Controllers
00:09:10.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:09:10.023 Controller IO queue size 128, less than required.
00:09:10.023 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:09:10.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:10.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:10.023 Initialization complete. Launching workers.
00:09:10.023 ========================================================
00:09:10.023 Latency(us)
00:09:10.023 Device Information : IOPS MiB/s Average min max
00:09:10.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9614.60 37.56 13323.01 2195.09 69938.57
00:09:10.023 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9495.80 37.09 13490.02 3029.05 57094.45
00:09:10.023 ========================================================
00:09:10.023 Total : 19110.39 74.65 13406.00 2195.09 69938.57
00:09:10.023
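Editor's note: the ten-second randwrite load above runs while the logical volume is snapshotted, resized, cloned, and the clone inflated, exercising the copy-on-write path under I/O. The same control-plane sequence in isolation, using the UUIDs this particular run happened to get; any fresh lvstore/lvol works the same way:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_lvol_snapshot 0b2d85c1-6a50-44fb-bb80-5d03398925fe MY_SNAPSHOT   # freeze the lvol's current contents
$rpc bdev_lvol_resize   0b2d85c1-6a50-44fb-bb80-5d03398925fe 30            # grow the live lvol to 30 (MiB, per LVOL_BDEV_FINAL_SIZE)
$rpc bdev_lvol_clone    a514e01d-bf92-48c9-a5ac-69408bae099c MY_CLONE      # thin clone off the snapshot
$rpc bdev_lvol_inflate  2c802e81-2355-4b2b-973a-2ddf1eca9bbe               # decouple the clone from its parent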
00:09:10.023 02:53:48 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:10.282 02:53:49 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0b2d85c1-6a50-44fb-bb80-5d03398925fe
00:09:10.541 02:53:49 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 748a2593-9dd4-464b-b078-57f972fa7bb2
00:09:10.541 02:53:49 -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:10.541 02:53:49 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:10.541 02:53:49 -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:10.541 02:53:49 -- nvmf/common.sh@477 -- # nvmfcleanup
00:09:10.541 02:53:49 -- nvmf/common.sh@117 -- # sync
00:09:10.541 02:53:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:10.541 02:53:49 -- nvmf/common.sh@120 -- # set +e
00:09:10.541 02:53:49 -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:10.541 02:53:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:10.541 rmmod nvme_tcp
00:09:10.541 rmmod nvme_fabrics
00:09:10.541 rmmod nvme_keyring
00:09:10.541 02:53:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:10.541 02:53:49 -- nvmf/common.sh@124 -- # set -e
00:09:10.541 02:53:49 -- nvmf/common.sh@125 -- # return 0
00:09:10.541 02:53:49 -- nvmf/common.sh@478 -- # '[' -n 78863 ']'
00:09:10.541 02:53:49 -- nvmf/common.sh@479 -- # killprocess 78863
00:09:10.541 02:53:49 -- common/autotest_common.sh@936 -- # '[' -z 78863 ']'
00:09:10.541 02:53:49 -- common/autotest_common.sh@940 -- # kill -0 78863
00:09:10.541 02:53:49 -- common/autotest_common.sh@941 -- # uname
00:09:10.541 02:53:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:10.541 02:53:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78863
00:09:10.541 killing process with pid 78863
00:09:10.541 02:53:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:09:10.541 02:53:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:09:10.541 02:53:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78863'
00:09:10.541 02:53:49 -- common/autotest_common.sh@955 -- # kill 78863
00:09:10.541 02:53:49 -- common/autotest_common.sh@960 -- # wait 78863
00:09:10.800 02:53:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:09:10.800 02:53:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:09:10.800 02:53:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:09:10.800 02:53:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:09:10.800 02:53:49 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:09:10.801 02:53:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:10.801 02:53:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:09:10.801 02:53:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:10.801 02:53:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:09:10.801
00:09:10.801 real 0m14.645s
00:09:10.801 user 1m1.556s
00:09:10.801 sys 0m4.599s
00:09:10.801 02:53:49 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:09:10.801 02:53:49 -- common/autotest_common.sh@10 -- # set +x
00:09:10.801 ************************************
00:09:10.801 END TEST nvmf_lvol
00:09:10.801 ************************************
00:09:11.060 02:53:49 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:11.060 02:53:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:09:11.060 02:53:49 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:09:11.060 02:53:49 -- common/autotest_common.sh@10 -- # set +x
00:09:11.060 ************************************
00:09:11.060 START TEST nvmf_lvs_grow
00:09:11.060 ************************************
00:09:11.060 02:53:49 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:11.060 * Looking for test storage...
00:09:11.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:11.060 02:53:50 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:11.060 02:53:50 -- nvmf/common.sh@7 -- # uname -s 00:09:11.060 02:53:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.060 02:53:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.060 02:53:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.060 02:53:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.060 02:53:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.060 02:53:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.060 02:53:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.060 02:53:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.060 02:53:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.060 02:53:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.060 02:53:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:09:11.060 02:53:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:09:11.060 02:53:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.060 02:53:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.060 02:53:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:11.060 02:53:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.060 02:53:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:11.060 02:53:50 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.060 02:53:50 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.060 02:53:50 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.060 02:53:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.060 02:53:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.060 02:53:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.060 02:53:50 -- paths/export.sh@5 -- # export PATH 00:09:11.060 02:53:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.060 02:53:50 -- nvmf/common.sh@47 -- # : 0 00:09:11.060 02:53:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:11.060 02:53:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:11.060 02:53:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.060 02:53:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.060 02:53:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.060 02:53:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:11.060 02:53:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:11.060 02:53:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:11.060 02:53:50 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:11.060 02:53:50 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:11.060 02:53:50 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:11.060 02:53:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:11.060 02:53:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.060 02:53:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:11.060 02:53:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:11.060 02:53:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:11.060 02:53:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.060 02:53:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.060 02:53:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.060 02:53:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:11.060 02:53:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:11.060 02:53:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:11.060 02:53:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:11.060 02:53:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:11.060 02:53:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:11.060 02:53:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:11.060 02:53:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:11.060 02:53:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:11.060 02:53:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:11.060 02:53:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:11.060 02:53:50 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:11.060 02:53:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:11.060 02:53:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:11.060 02:53:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:11.060 02:53:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:11.060 02:53:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:11.060 02:53:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:11.060 02:53:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:11.060 02:53:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:11.060 Cannot find device "nvmf_tgt_br" 00:09:11.060 02:53:50 -- nvmf/common.sh@155 -- # true 00:09:11.060 02:53:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:11.060 Cannot find device "nvmf_tgt_br2" 00:09:11.060 02:53:50 -- nvmf/common.sh@156 -- # true 00:09:11.060 02:53:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:11.060 02:53:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:11.060 Cannot find device "nvmf_tgt_br" 00:09:11.060 02:53:50 -- nvmf/common.sh@158 -- # true 00:09:11.060 02:53:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:11.060 Cannot find device "nvmf_tgt_br2" 00:09:11.060 02:53:50 -- nvmf/common.sh@159 -- # true 00:09:11.060 02:53:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:11.320 02:53:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:11.320 02:53:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:11.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.320 02:53:50 -- nvmf/common.sh@162 -- # true 00:09:11.320 02:53:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:11.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:11.320 02:53:50 -- nvmf/common.sh@163 -- # true 00:09:11.320 02:53:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:11.320 02:53:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:11.320 02:53:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:11.320 02:53:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:11.320 02:53:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:11.320 02:53:50 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:11.320 02:53:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:11.320 02:53:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:11.320 02:53:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:11.320 02:53:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:11.320 02:53:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:11.320 02:53:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:11.320 02:53:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:11.320 02:53:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:11.320 02:53:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
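The nvmf_veth_init trace above condenses to a small, reproducible topology: one veth pair for the initiator and two for the target, with the target-side ends moved into a private network namespace. A standalone sketch of the same setup (iproute2 assumed; names and addresses mirror the trace):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target pair
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target ends live in the namespace
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up

The bridge enslavement, iptables rules and ping checks that follow are what stitch the three *_br ends together so the root namespace can reach 10.0.0.2 and 10.0.0.3.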
00:09:11.320 02:53:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:09:11.320 02:53:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:09:11.320 02:53:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:09:11.320 02:53:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:09:11.320 02:53:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:09:11.320 02:53:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:09:11.320 02:53:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:09:11.320 02:53:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:09:11.320 02:53:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:09:11.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:11.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms
00:09:11.320
00:09:11.320 --- 10.0.0.2 ping statistics ---
00:09:11.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:11.320 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms
00:09:11.320 02:53:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:09:11.320 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:09:11.320 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms
00:09:11.320
00:09:11.320 --- 10.0.0.3 ping statistics ---
00:09:11.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:11.320 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms
00:09:11.320 02:53:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:09:11.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:11.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms
00:09:11.320
00:09:11.320 --- 10.0.0.1 ping statistics ---
00:09:11.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:11.320 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms
00:09:11.321 02:53:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:11.321 02:53:50 -- nvmf/common.sh@422 -- # return 0
00:09:11.321 02:53:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:09:11.321 02:53:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:11.321 02:53:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:09:11.321 02:53:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:09:11.321 02:53:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:11.321 02:53:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:09:11.321 02:53:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:09:11.321 02:53:50 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1
00:09:11.321 02:53:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:09:11.321 02:53:50 -- common/autotest_common.sh@710 -- # xtrace_disable
00:09:11.321 02:53:50 -- common/autotest_common.sh@10 -- # set +x
00:09:11.321 02:53:50 -- nvmf/common.sh@470 -- # nvmfpid=79246
00:09:11.321 02:53:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
00:09:11.321 02:53:50 -- nvmf/common.sh@471 -- # waitforlisten 79246
00:09:11.321 02:53:50 -- common/autotest_common.sh@817 -- # '[' -z 79246 ']'
00:09:11.321 02:53:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:11.321 02:53:50 -- common/autotest_common.sh@822 -- # local max_retries=100
00:09:11.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
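waitforlisten above amounts to a launch-and-poll pattern: start nvmf_tgt inside the namespace, then spin until its RPC socket answers. A reduced sketch of that idea, not the harness's actual implementation (the real helper retries more carefully; rpc_get_methods is used here purely as a liveness probe):

  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  max_retries=100
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || exit 1     # target died during startup
      (( --max_retries > 0 )) || exit 1            # give up eventually
      sleep 0.5
  done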
00:09:11.321 02:53:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.321 02:53:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:11.321 02:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:11.580 [2024-04-23 02:53:50.503469] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:11.580 [2024-04-23 02:53:50.503578] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:11.580 [2024-04-23 02:53:50.625739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:11.580 [2024-04-23 02:53:50.643829] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.580 [2024-04-23 02:53:50.683349] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:11.580 [2024-04-23 02:53:50.683422] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:11.580 [2024-04-23 02:53:50.683446] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:11.580 [2024-04-23 02:53:50.683456] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:11.580 [2024-04-23 02:53:50.683465] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:11.580 [2024-04-23 02:53:50.683503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.839 02:53:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:11.839 02:53:50 -- common/autotest_common.sh@850 -- # return 0 00:09:11.839 02:53:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:11.839 02:53:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:11.839 02:53:50 -- common/autotest_common.sh@10 -- # set +x 00:09:11.839 02:53:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:11.839 02:53:50 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:12.098 [2024-04-23 02:53:51.043795] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:12.098 02:53:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:12.098 02:53:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:12.098 02:53:51 -- common/autotest_common.sh@10 -- # set +x 00:09:12.098 ************************************ 00:09:12.098 START TEST lvs_grow_clean 00:09:12.098 ************************************ 00:09:12.098 02:53:51 -- common/autotest_common.sh@1111 -- # lvs_grow 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:12.098 02:53:51 -- 
target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:12.098 02:53:51 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:12.357 02:53:51 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:12.357 02:53:51 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:12.615 02:53:51 -- target/nvmf_lvs_grow.sh@28 -- # lvs=8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:12.615 02:53:51 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:12.615 02:53:51 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:12.874 02:53:51 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:12.874 02:53:51 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:12.874 02:53:51 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea lvol 150 00:09:13.133 02:53:52 -- target/nvmf_lvs_grow.sh@33 -- # lvol=efbd25a2-3a2b-4599-88fa-03c0dbb088b6 00:09:13.133 02:53:52 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:13.133 02:53:52 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:13.393 [2024-04-23 02:53:52.384778] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:13.393 [2024-04-23 02:53:52.384883] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:13.393 true 00:09:13.393 02:53:52 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:13.393 02:53:52 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:13.652 02:53:52 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:13.652 02:53:52 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:13.911 02:53:52 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 efbd25a2-3a2b-4599-88fa-03c0dbb088b6 00:09:14.169 02:53:53 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:14.428 [2024-04-23 02:53:53.369251] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.428 02:53:53 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.687 02:53:53 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:14.687 02:53:53 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79325 00:09:14.687 02:53:53 -- 
target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:14.687 02:53:53 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79325 /var/tmp/bdevperf.sock 00:09:14.687 02:53:53 -- common/autotest_common.sh@817 -- # '[' -z 79325 ']' 00:09:14.687 02:53:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:14.687 02:53:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:14.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:14.687 02:53:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:14.687 02:53:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:14.687 02:53:53 -- common/autotest_common.sh@10 -- # set +x 00:09:14.687 [2024-04-23 02:53:53.622560] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:14.687 [2024-04-23 02:53:53.622658] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79325 ] 00:09:14.687 [2024-04-23 02:53:53.738728] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:14.687 [2024-04-23 02:53:53.760374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.687 [2024-04-23 02:53:53.800377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.946 02:53:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:14.946 02:53:53 -- common/autotest_common.sh@850 -- # return 0 00:09:14.946 02:53:53 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:15.205 Nvme0n1 00:09:15.205 02:53:54 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:15.464 [ 00:09:15.464 { 00:09:15.464 "name": "Nvme0n1", 00:09:15.464 "aliases": [ 00:09:15.464 "efbd25a2-3a2b-4599-88fa-03c0dbb088b6" 00:09:15.464 ], 00:09:15.464 "product_name": "NVMe disk", 00:09:15.464 "block_size": 4096, 00:09:15.464 "num_blocks": 38912, 00:09:15.464 "uuid": "efbd25a2-3a2b-4599-88fa-03c0dbb088b6", 00:09:15.464 "assigned_rate_limits": { 00:09:15.464 "rw_ios_per_sec": 0, 00:09:15.464 "rw_mbytes_per_sec": 0, 00:09:15.464 "r_mbytes_per_sec": 0, 00:09:15.464 "w_mbytes_per_sec": 0 00:09:15.464 }, 00:09:15.464 "claimed": false, 00:09:15.464 "zoned": false, 00:09:15.464 "supported_io_types": { 00:09:15.464 "read": true, 00:09:15.464 "write": true, 00:09:15.464 "unmap": true, 00:09:15.464 "write_zeroes": true, 00:09:15.464 "flush": true, 00:09:15.464 "reset": true, 00:09:15.464 "compare": true, 00:09:15.464 "compare_and_write": true, 00:09:15.464 "abort": true, 00:09:15.464 "nvme_admin": true, 00:09:15.464 "nvme_io": true 00:09:15.464 }, 00:09:15.464 "memory_domains": [ 00:09:15.464 { 00:09:15.464 "dma_device_id": "system", 00:09:15.464 "dma_device_type": 1 00:09:15.464 } 00:09:15.464 ], 00:09:15.464 "driver_specific": { 00:09:15.464 "nvme": [ 00:09:15.464 { 00:09:15.464 "trid": { 00:09:15.464 "trtype": "TCP", 00:09:15.464 "adrfam": "IPv4", 00:09:15.464 "traddr": "10.0.0.2", 
00:09:15.464 "trsvcid": "4420", 00:09:15.464 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:15.464 }, 00:09:15.464 "ctrlr_data": { 00:09:15.464 "cntlid": 1, 00:09:15.464 "vendor_id": "0x8086", 00:09:15.464 "model_number": "SPDK bdev Controller", 00:09:15.464 "serial_number": "SPDK0", 00:09:15.464 "firmware_revision": "24.05", 00:09:15.464 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:15.464 "oacs": { 00:09:15.464 "security": 0, 00:09:15.464 "format": 0, 00:09:15.464 "firmware": 0, 00:09:15.464 "ns_manage": 0 00:09:15.464 }, 00:09:15.464 "multi_ctrlr": true, 00:09:15.464 "ana_reporting": false 00:09:15.464 }, 00:09:15.464 "vs": { 00:09:15.464 "nvme_version": "1.3" 00:09:15.464 }, 00:09:15.464 "ns_data": { 00:09:15.464 "id": 1, 00:09:15.464 "can_share": true 00:09:15.464 } 00:09:15.464 } 00:09:15.464 ], 00:09:15.464 "mp_policy": "active_passive" 00:09:15.464 } 00:09:15.464 } 00:09:15.464 ] 00:09:15.464 02:53:54 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79341 00:09:15.464 02:53:54 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:15.464 02:53:54 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:15.464 Running I/O for 10 seconds... 00:09:16.842 Latency(us) 00:09:16.842 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.842 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.842 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:16.842 =================================================================================================================== 00:09:16.842 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:16.842 00:09:17.411 02:53:56 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:17.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.669 Nvme0n1 : 2.00 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:17.669 =================================================================================================================== 00:09:17.669 Total : 6794.50 26.54 0.00 0.00 0.00 0.00 0.00 00:09:17.669 00:09:17.669 true 00:09:17.669 02:53:56 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:17.669 02:53:56 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:17.928 02:53:57 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:17.928 02:53:57 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:17.928 02:53:57 -- target/nvmf_lvs_grow.sh@65 -- # wait 79341 00:09:18.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.496 Nvme0n1 : 3.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:18.496 =================================================================================================================== 00:09:18.496 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:18.496 00:09:19.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.873 Nvme0n1 : 4.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:19.873 =================================================================================================================== 00:09:19.873 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:19.873 00:09:20.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.811 
Nvme0n1 : 5.00 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:09:20.811 =================================================================================================================== 00:09:20.811 Total : 6807.20 26.59 0.00 0.00 0.00 0.00 0.00 00:09:20.811 00:09:21.759 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.759 Nvme0n1 : 6.00 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:21.759 =================================================================================================================== 00:09:21.759 Total : 6815.67 26.62 0.00 0.00 0.00 0.00 0.00 00:09:21.759 00:09:22.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.704 Nvme0n1 : 7.00 6839.86 26.72 0.00 0.00 0.00 0.00 0.00 00:09:22.704 =================================================================================================================== 00:09:22.704 Total : 6839.86 26.72 0.00 0.00 0.00 0.00 0.00 00:09:22.704 00:09:23.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.639 Nvme0n1 : 8.00 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:09:23.639 =================================================================================================================== 00:09:23.639 Total : 6826.25 26.67 0.00 0.00 0.00 0.00 0.00 00:09:23.639 00:09:24.577 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.577 Nvme0n1 : 9.00 6843.89 26.73 0.00 0.00 0.00 0.00 0.00 00:09:24.577 =================================================================================================================== 00:09:24.577 Total : 6843.89 26.73 0.00 0.00 0.00 0.00 0.00 00:09:24.577 00:09:25.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.513 Nvme0n1 : 10.00 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:09:25.513 =================================================================================================================== 00:09:25.513 Total : 6832.60 26.69 0.00 0.00 0.00 0.00 0.00 00:09:25.513 00:09:25.513 00:09:25.513 Latency(us) 00:09:25.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.513 Nvme0n1 : 10.01 6839.73 26.72 0.00 0.00 18709.68 15490.33 45756.04 00:09:25.513 =================================================================================================================== 00:09:25.513 Total : 6839.73 26.72 0.00 0.00 18709.68 15490.33 45756.04 00:09:25.513 0 00:09:25.513 02:54:04 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79325 00:09:25.513 02:54:04 -- common/autotest_common.sh@936 -- # '[' -z 79325 ']' 00:09:25.513 02:54:04 -- common/autotest_common.sh@940 -- # kill -0 79325 00:09:25.513 02:54:04 -- common/autotest_common.sh@941 -- # uname 00:09:25.513 02:54:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:25.513 02:54:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79325 00:09:25.513 02:54:04 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:09:25.513 02:54:04 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:09:25.513 killing process with pid 79325 00:09:25.513 02:54:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79325' 00:09:25.513 Received shutdown signal, test time was about 10.000000 seconds 00:09:25.513 00:09:25.513 Latency(us) 00:09:25.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:25.513 
=================================================================================================================== 00:09:25.513 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:25.513 02:54:04 -- common/autotest_common.sh@955 -- # kill 79325 00:09:25.513 02:54:04 -- common/autotest_common.sh@960 -- # wait 79325 00:09:25.772 02:54:04 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:26.030 02:54:05 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:26.030 02:54:05 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:26.289 02:54:05 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:26.289 02:54:05 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:26.289 02:54:05 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:26.547 [2024-04-23 02:54:05.556759] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:26.547 02:54:05 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:26.547 02:54:05 -- common/autotest_common.sh@638 -- # local es=0 00:09:26.547 02:54:05 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:26.547 02:54:05 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.547 02:54:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:26.547 02:54:05 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.547 02:54:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:26.547 02:54:05 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.547 02:54:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:26.547 02:54:05 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:26.547 02:54:05 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:26.547 02:54:05 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:26.806 request: 00:09:26.806 { 00:09:26.806 "uuid": "8fdf41d9-3c64-49e2-a801-e484df27b5ea", 00:09:26.806 "method": "bdev_lvol_get_lvstores", 00:09:26.806 "req_id": 1 00:09:26.806 } 00:09:26.806 Got JSON-RPC error response 00:09:26.806 response: 00:09:26.806 { 00:09:26.806 "code": -19, 00:09:26.806 "message": "No such device" 00:09:26.806 } 00:09:26.806 02:54:05 -- common/autotest_common.sh@641 -- # es=1 00:09:26.806 02:54:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:26.806 02:54:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:26.806 02:54:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:26.806 02:54:05 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:27.065 aio_bdev 00:09:27.065 02:54:06 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev efbd25a2-3a2b-4599-88fa-03c0dbb088b6 00:09:27.065 02:54:06 -- common/autotest_common.sh@885 -- # 
local bdev_name=efbd25a2-3a2b-4599-88fa-03c0dbb088b6 00:09:27.065 02:54:06 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:27.065 02:54:06 -- common/autotest_common.sh@887 -- # local i 00:09:27.065 02:54:06 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:27.065 02:54:06 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:27.065 02:54:06 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:27.323 02:54:06 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b efbd25a2-3a2b-4599-88fa-03c0dbb088b6 -t 2000 00:09:27.583 [ 00:09:27.583 { 00:09:27.583 "name": "efbd25a2-3a2b-4599-88fa-03c0dbb088b6", 00:09:27.583 "aliases": [ 00:09:27.583 "lvs/lvol" 00:09:27.583 ], 00:09:27.583 "product_name": "Logical Volume", 00:09:27.583 "block_size": 4096, 00:09:27.583 "num_blocks": 38912, 00:09:27.583 "uuid": "efbd25a2-3a2b-4599-88fa-03c0dbb088b6", 00:09:27.583 "assigned_rate_limits": { 00:09:27.583 "rw_ios_per_sec": 0, 00:09:27.583 "rw_mbytes_per_sec": 0, 00:09:27.583 "r_mbytes_per_sec": 0, 00:09:27.583 "w_mbytes_per_sec": 0 00:09:27.583 }, 00:09:27.583 "claimed": false, 00:09:27.583 "zoned": false, 00:09:27.583 "supported_io_types": { 00:09:27.583 "read": true, 00:09:27.583 "write": true, 00:09:27.583 "unmap": true, 00:09:27.583 "write_zeroes": true, 00:09:27.583 "flush": false, 00:09:27.583 "reset": true, 00:09:27.583 "compare": false, 00:09:27.583 "compare_and_write": false, 00:09:27.583 "abort": false, 00:09:27.583 "nvme_admin": false, 00:09:27.583 "nvme_io": false 00:09:27.583 }, 00:09:27.583 "driver_specific": { 00:09:27.583 "lvol": { 00:09:27.583 "lvol_store_uuid": "8fdf41d9-3c64-49e2-a801-e484df27b5ea", 00:09:27.583 "base_bdev": "aio_bdev", 00:09:27.583 "thin_provision": false, 00:09:27.583 "snapshot": false, 00:09:27.583 "clone": false, 00:09:27.583 "esnap_clone": false 00:09:27.583 } 00:09:27.583 } 00:09:27.583 } 00:09:27.583 ] 00:09:27.583 02:54:06 -- common/autotest_common.sh@893 -- # return 0 00:09:27.583 02:54:06 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:27.583 02:54:06 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:27.842 02:54:06 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:27.842 02:54:06 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:27.842 02:54:06 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:28.101 02:54:07 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:28.101 02:54:07 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete efbd25a2-3a2b-4599-88fa-03c0dbb088b6 00:09:28.360 02:54:07 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8fdf41d9-3c64-49e2-a801-e484df27b5ea 00:09:28.619 02:54:07 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:28.619 02:54:07 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.210 ************************************ 00:09:29.210 END TEST lvs_grow_clean 00:09:29.210 ************************************ 00:09:29.210 00:09:29.210 real 0m16.925s 00:09:29.210 user 0m15.939s 00:09:29.210 sys 0m2.236s 00:09:29.210 02:54:08 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:09:29.210 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:29.210 02:54:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:29.210 02:54:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:29.210 02:54:08 -- common/autotest_common.sh@10 -- # set +x 00:09:29.210 ************************************ 00:09:29.210 START TEST lvs_grow_dirty 00:09:29.210 ************************************ 00:09:29.210 02:54:08 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.210 02:54:08 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.469 02:54:08 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.469 02:54:08 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.728 02:54:08 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:29.728 02:54:08 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:29.728 02:54:08 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.987 02:54:08 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.987 02:54:08 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.987 02:54:08 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 lvol 150 00:09:29.987 02:54:09 -- target/nvmf_lvs_grow.sh@33 -- # lvol=1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:29.987 02:54:09 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:29.987 02:54:09 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:30.245 [2024-04-23 02:54:09.368913] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:30.245 [2024-04-23 02:54:09.369016] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:30.245 true 00:09:30.245 02:54:09 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:30.245 02:54:09 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.504 02:54:09 -- 
target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.504 02:54:09 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:31.071 02:54:09 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:31.071 02:54:10 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:31.329 02:54:10 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.589 02:54:10 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:31.589 02:54:10 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=79585 00:09:31.589 02:54:10 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.589 02:54:10 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 79585 /var/tmp/bdevperf.sock 00:09:31.589 02:54:10 -- common/autotest_common.sh@817 -- # '[' -z 79585 ']' 00:09:31.589 02:54:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:31.589 02:54:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:31.589 02:54:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.589 02:54:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:31.589 02:54:10 -- common/autotest_common.sh@10 -- # set +x 00:09:31.589 [2024-04-23 02:54:10.725780] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:31.589 [2024-04-23 02:54:10.726168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79585 ] 00:09:31.848 [2024-04-23 02:54:10.860498] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
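The RPC sequence traced just above is the standard export path: create a subsystem that allows any host (-a), add the lvol bdev as a namespace, and open listeners on the namespaced address. Pulled out of the trace for reference (rpc_py is the same scripts/rpc.py wrapper, talking to the default /var/tmp/spdk.sock):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvol=1ea2d57b-a537-4a91-9362-d239c955a5bd    # lvol UUID created above
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420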
00:09:31.848 [2024-04-23 02:54:10.879990] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.848 [2024-04-23 02:54:10.919477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.784 02:54:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:32.784 02:54:11 -- common/autotest_common.sh@850 -- # return 0 00:09:32.784 02:54:11 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:32.784 Nvme0n1 00:09:32.784 02:54:11 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:33.043 [ 00:09:33.043 { 00:09:33.043 "name": "Nvme0n1", 00:09:33.043 "aliases": [ 00:09:33.043 "1ea2d57b-a537-4a91-9362-d239c955a5bd" 00:09:33.043 ], 00:09:33.043 "product_name": "NVMe disk", 00:09:33.043 "block_size": 4096, 00:09:33.043 "num_blocks": 38912, 00:09:33.043 "uuid": "1ea2d57b-a537-4a91-9362-d239c955a5bd", 00:09:33.043 "assigned_rate_limits": { 00:09:33.043 "rw_ios_per_sec": 0, 00:09:33.043 "rw_mbytes_per_sec": 0, 00:09:33.043 "r_mbytes_per_sec": 0, 00:09:33.043 "w_mbytes_per_sec": 0 00:09:33.043 }, 00:09:33.043 "claimed": false, 00:09:33.043 "zoned": false, 00:09:33.043 "supported_io_types": { 00:09:33.043 "read": true, 00:09:33.043 "write": true, 00:09:33.043 "unmap": true, 00:09:33.043 "write_zeroes": true, 00:09:33.043 "flush": true, 00:09:33.043 "reset": true, 00:09:33.043 "compare": true, 00:09:33.043 "compare_and_write": true, 00:09:33.043 "abort": true, 00:09:33.043 "nvme_admin": true, 00:09:33.043 "nvme_io": true 00:09:33.043 }, 00:09:33.043 "memory_domains": [ 00:09:33.043 { 00:09:33.043 "dma_device_id": "system", 00:09:33.043 "dma_device_type": 1 00:09:33.043 } 00:09:33.043 ], 00:09:33.043 "driver_specific": { 00:09:33.043 "nvme": [ 00:09:33.043 { 00:09:33.043 "trid": { 00:09:33.043 "trtype": "TCP", 00:09:33.043 "adrfam": "IPv4", 00:09:33.043 "traddr": "10.0.0.2", 00:09:33.043 "trsvcid": "4420", 00:09:33.043 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:33.043 }, 00:09:33.043 "ctrlr_data": { 00:09:33.043 "cntlid": 1, 00:09:33.043 "vendor_id": "0x8086", 00:09:33.043 "model_number": "SPDK bdev Controller", 00:09:33.043 "serial_number": "SPDK0", 00:09:33.043 "firmware_revision": "24.05", 00:09:33.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:33.043 "oacs": { 00:09:33.043 "security": 0, 00:09:33.043 "format": 0, 00:09:33.043 "firmware": 0, 00:09:33.043 "ns_manage": 0 00:09:33.043 }, 00:09:33.043 "multi_ctrlr": true, 00:09:33.043 "ana_reporting": false 00:09:33.043 }, 00:09:33.043 "vs": { 00:09:33.043 "nvme_version": "1.3" 00:09:33.043 }, 00:09:33.043 "ns_data": { 00:09:33.043 "id": 1, 00:09:33.043 "can_share": true 00:09:33.043 } 00:09:33.043 } 00:09:33.043 ], 00:09:33.043 "mp_policy": "active_passive" 00:09:33.043 } 00:09:33.043 } 00:09:33.043 ] 00:09:33.043 02:54:12 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=79614 00:09:33.043 02:54:12 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:33.043 02:54:12 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:33.302 Running I/O for 10 seconds... 
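The point of the test sits in the next few entries: while bdevperf drives random writes, the backing file has already been doubled with truncate plus bdev_aio_rescan, and bdev_lvol_grow_lvstore is issued mid-run. A condensed sketch of that grow-and-verify step, with the arithmetic spelled out (4 MiB clusters: a 200M file yields the 49 data clusters seen earlier once metadata is set aside, so a 400M file should yield 99):

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  lvs=f708d971-d0d3-4620-8277-8d7f5fa32ed5     # lvstore UUID from the trace
  truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  $rpc_py bdev_aio_rescan aio_bdev             # let the bdev pick up the larger file
  $rpc_py bdev_lvol_grow_lvstore -u "$lvs"     # issued while I/O is in flight
  clusters=$($rpc_py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters')
  (( clusters == 99 ))                         # was 49 before the grow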
00:09:34.238 Latency(us)
00:09:34.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:34.238 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:34.238 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00
00:09:34.238 ===================================================================================================================
00:09:34.238 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00
00:09:34.238
00:09:35.170 02:54:14 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f708d971-d0d3-4620-8277-8d7f5fa32ed5
00:09:35.170 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:35.170 Nvme0n1 : 2.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00
00:09:35.170 ===================================================================================================================
00:09:35.170 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00
00:09:35.170
00:09:35.428 true
00:09:35.428 02:54:14 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5
00:09:35.428 02:54:14 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters'
00:09:35.686 02:54:14 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99
00:09:35.686 02:54:14 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 ))
00:09:35.686 02:54:14 -- target/nvmf_lvs_grow.sh@65 -- # wait 79614
00:09:36.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:36.252 Nvme0n1 : 3.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00
00:09:36.252 ===================================================================================================================
00:09:36.252 Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00
00:09:36.252
00:09:37.186 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:37.186 Nvme0n1 : 4.00 6953.25 27.16 0.00 0.00 0.00 0.00 0.00
00:09:37.186 ===================================================================================================================
00:09:37.186 Total : 6953.25 27.16 0.00 0.00 0.00 0.00 0.00
00:09:37.186
00:09:38.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:38.121 Nvme0n1 : 5.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00
00:09:38.121 ===================================================================================================================
00:09:38.121 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00
00:09:38.121
00:09:39.057 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:39.057 Nvme0n1 : 6.00 6963.83 27.20 0.00 0.00 0.00 0.00 0.00
00:09:39.057 ===================================================================================================================
00:09:39.057 Total : 6963.83 27.20 0.00 0.00 0.00 0.00 0.00
00:09:39.057
00:09:40.430 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:40.430 Nvme0n1 : 7.00 6966.86 27.21 0.00 0.00 0.00 0.00 0.00
00:09:40.430 ===================================================================================================================
00:09:40.430 Total : 6966.86 27.21 0.00 0.00 0.00 0.00 0.00
00:09:40.430
00:09:41.364 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:41.364 Nvme0n1 : 8.00 6953.25 27.16 0.00 0.00 0.00 0.00 0.00
00:09:41.364 ===================================================================================================================
00:09:41.364 Total : 6953.25 27.16 0.00 0.00 0.00 0.00 0.00
00:09:41.364
00:09:42.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:42.301 Nvme0n1 : 9.00 6615.44 25.84 0.00 0.00 0.00 0.00 0.00
00:09:42.301 ===================================================================================================================
00:09:42.301 Total : 6615.44 25.84 0.00 0.00 0.00 0.00 0.00
00:09:42.301
00:09:43.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:43.237 Nvme0n1 : 10.00 6627.00 25.89 0.00 0.00 0.00 0.00 0.00
00:09:43.237 ===================================================================================================================
00:09:43.237 Total : 6627.00 25.89 0.00 0.00 0.00 0.00 0.00
00:09:43.237
00:09:43.237
00:09:43.237 Latency(us)
00:09:43.237 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:43.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:09:43.237 Nvme0n1 : 10.02 6624.93 25.88 0.00 0.00 19316.42 15252.01 438495.42
00:09:43.237 ===================================================================================================================
00:09:43.237 Total : 6624.93 25.88 0.00 0.00 19316.42 15252.01 438495.42
00:09:43.237 0
00:09:43.237 02:54:22 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 79585
00:09:43.237 02:54:22 -- common/autotest_common.sh@936 -- # '[' -z 79585 ']'
00:09:43.237 02:54:22 -- common/autotest_common.sh@940 -- # kill -0 79585
00:09:43.237 02:54:22 -- common/autotest_common.sh@941 -- # uname
00:09:43.237 02:54:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:09:43.237 02:54:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79585
00:09:43.237 killing process with pid 79585
Received shutdown signal, test time was about 10.000000 seconds
00:09:43.238
00:09:43.238 Latency(us)
00:09:43.238 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:43.238 ===================================================================================================================
00:09:43.238 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:43.238 02:54:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:09:43.238 02:54:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:09:43.238 02:54:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79585'
00:09:43.238 02:54:22 -- common/autotest_common.sh@955 -- # kill 79585
00:09:43.496 02:54:22 -- common/autotest_common.sh@960 -- # wait 79585
00:09:43.496 02:54:22 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:43.755 02:54:22 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5
00:09:43.755 02:54:22 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters'
00:09:44.014 02:54:22 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61
00:09:44.014 02:54:23 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]]
00:09:44.014 02:54:23 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 79246
00:09:44.014 02:54:23 -- target/nvmf_lvs_grow.sh@74 -- # wait 79246
00:09:44.014 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 79246 Killed "${NVMF_APP[@]}" "$@"
00:09:44.014
02:54:23 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:44.014 02:54:23 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:44.014 02:54:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:44.014 02:54:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:44.014 02:54:23 -- common/autotest_common.sh@10 -- # set +x 00:09:44.014 02:54:23 -- nvmf/common.sh@470 -- # nvmfpid=79740 00:09:44.014 02:54:23 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:44.014 02:54:23 -- nvmf/common.sh@471 -- # waitforlisten 79740 00:09:44.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.014 02:54:23 -- common/autotest_common.sh@817 -- # '[' -z 79740 ']' 00:09:44.014 02:54:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.014 02:54:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:44.014 02:54:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.014 02:54:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:44.014 02:54:23 -- common/autotest_common.sh@10 -- # set +x 00:09:44.014 [2024-04-23 02:54:23.066345] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:44.014 [2024-04-23 02:54:23.066753] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:44.272 [2024-04-23 02:54:23.194712] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:44.272 [2024-04-23 02:54:23.209551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.272 [2024-04-23 02:54:23.248784] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.272 [2024-04-23 02:54:23.248881] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.272 [2024-04-23 02:54:23.248897] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.272 [2024-04-23 02:54:23.248907] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.272 [2024-04-23 02:54:23.248915] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
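This restart is what makes the dirty variant dirty: the previous target (pid 79246) was removed with kill -9 while the lvstore was still loaded, so no clean-shutdown metadata was written. When the new target re-creates the AIO bdev in the next entries, the blobstore detects the unclean state and replays recovery before the lvstore is usable again. Roughly:

  # No lvol/aio teardown before the kill -- the on-disk lvstore stays dirty.
  kill -9 "$nvmfpid"; wait "$nvmfpid"
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # Re-attaching the same backing file is what triggers the
  # "Performing recovery on blobstore" notices below.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
      /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096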
00:09:44.272 [2024-04-23 02:54:23.248973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.272 02:54:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:44.272 02:54:23 -- common/autotest_common.sh@850 -- # return 0 00:09:44.272 02:54:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:44.272 02:54:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:44.272 02:54:23 -- common/autotest_common.sh@10 -- # set +x 00:09:44.272 02:54:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.272 02:54:23 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.531 [2024-04-23 02:54:23.628130] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:44.531 [2024-04-23 02:54:23.628476] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:44.531 [2024-04-23 02:54:23.628648] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:44.531 02:54:23 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:44.531 02:54:23 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:44.531 02:54:23 -- common/autotest_common.sh@885 -- # local bdev_name=1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:44.531 02:54:23 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:44.531 02:54:23 -- common/autotest_common.sh@887 -- # local i 00:09:44.531 02:54:23 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:44.531 02:54:23 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:44.531 02:54:23 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:45.099 02:54:23 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ea2d57b-a537-4a91-9362-d239c955a5bd -t 2000 00:09:45.099 [ 00:09:45.099 { 00:09:45.099 "name": "1ea2d57b-a537-4a91-9362-d239c955a5bd", 00:09:45.099 "aliases": [ 00:09:45.099 "lvs/lvol" 00:09:45.099 ], 00:09:45.099 "product_name": "Logical Volume", 00:09:45.099 "block_size": 4096, 00:09:45.099 "num_blocks": 38912, 00:09:45.099 "uuid": "1ea2d57b-a537-4a91-9362-d239c955a5bd", 00:09:45.099 "assigned_rate_limits": { 00:09:45.099 "rw_ios_per_sec": 0, 00:09:45.099 "rw_mbytes_per_sec": 0, 00:09:45.099 "r_mbytes_per_sec": 0, 00:09:45.099 "w_mbytes_per_sec": 0 00:09:45.099 }, 00:09:45.099 "claimed": false, 00:09:45.099 "zoned": false, 00:09:45.099 "supported_io_types": { 00:09:45.099 "read": true, 00:09:45.099 "write": true, 00:09:45.099 "unmap": true, 00:09:45.099 "write_zeroes": true, 00:09:45.099 "flush": false, 00:09:45.099 "reset": true, 00:09:45.099 "compare": false, 00:09:45.099 "compare_and_write": false, 00:09:45.099 "abort": false, 00:09:45.099 "nvme_admin": false, 00:09:45.099 "nvme_io": false 00:09:45.099 }, 00:09:45.099 "driver_specific": { 00:09:45.099 "lvol": { 00:09:45.099 "lvol_store_uuid": "f708d971-d0d3-4620-8277-8d7f5fa32ed5", 00:09:45.099 "base_bdev": "aio_bdev", 00:09:45.099 "thin_provision": false, 00:09:45.099 "snapshot": false, 00:09:45.099 "clone": false, 00:09:45.099 "esnap_clone": false 00:09:45.099 } 00:09:45.099 } 00:09:45.099 } 00:09:45.099 ] 00:09:45.099 02:54:24 -- common/autotest_common.sh@893 -- # return 0 00:09:45.099 02:54:24 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:45.099 02:54:24 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:45.358 02:54:24 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:45.358 02:54:24 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:45.358 02:54:24 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:45.617 02:54:24 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:45.617 02:54:24 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:45.877 [2024-04-23 02:54:24.954169] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:45.877 02:54:24 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:45.877 02:54:24 -- common/autotest_common.sh@638 -- # local es=0 00:09:45.877 02:54:24 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:45.877 02:54:24 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.877 02:54:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.877 02:54:24 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.877 02:54:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.877 02:54:24 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.877 02:54:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:09:45.877 02:54:24 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:45.877 02:54:24 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:45.877 02:54:24 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:46.150 request: 00:09:46.150 { 00:09:46.150 "uuid": "f708d971-d0d3-4620-8277-8d7f5fa32ed5", 00:09:46.150 "method": "bdev_lvol_get_lvstores", 00:09:46.150 "req_id": 1 00:09:46.150 } 00:09:46.150 Got JSON-RPC error response 00:09:46.150 response: 00:09:46.150 { 00:09:46.150 "code": -19, 00:09:46.150 "message": "No such device" 00:09:46.150 } 00:09:46.150 02:54:25 -- common/autotest_common.sh@641 -- # es=1 00:09:46.150 02:54:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:09:46.150 02:54:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:09:46.150 02:54:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:09:46.150 02:54:25 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.409 aio_bdev 00:09:46.409 02:54:25 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:46.409 02:54:25 -- common/autotest_common.sh@885 -- # local bdev_name=1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:46.409 02:54:25 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:09:46.409 02:54:25 -- common/autotest_common.sh@887 -- # local i 00:09:46.409 02:54:25 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:09:46.409 02:54:25 -- 
common/autotest_common.sh@888 -- # bdev_timeout=2000 00:09:46.409 02:54:25 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:46.668 02:54:25 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 1ea2d57b-a537-4a91-9362-d239c955a5bd -t 2000 00:09:46.927 [ 00:09:46.927 { 00:09:46.927 "name": "1ea2d57b-a537-4a91-9362-d239c955a5bd", 00:09:46.927 "aliases": [ 00:09:46.927 "lvs/lvol" 00:09:46.927 ], 00:09:46.927 "product_name": "Logical Volume", 00:09:46.927 "block_size": 4096, 00:09:46.927 "num_blocks": 38912, 00:09:46.927 "uuid": "1ea2d57b-a537-4a91-9362-d239c955a5bd", 00:09:46.927 "assigned_rate_limits": { 00:09:46.927 "rw_ios_per_sec": 0, 00:09:46.927 "rw_mbytes_per_sec": 0, 00:09:46.927 "r_mbytes_per_sec": 0, 00:09:46.927 "w_mbytes_per_sec": 0 00:09:46.927 }, 00:09:46.927 "claimed": false, 00:09:46.927 "zoned": false, 00:09:46.927 "supported_io_types": { 00:09:46.927 "read": true, 00:09:46.927 "write": true, 00:09:46.927 "unmap": true, 00:09:46.927 "write_zeroes": true, 00:09:46.927 "flush": false, 00:09:46.927 "reset": true, 00:09:46.927 "compare": false, 00:09:46.927 "compare_and_write": false, 00:09:46.927 "abort": false, 00:09:46.927 "nvme_admin": false, 00:09:46.927 "nvme_io": false 00:09:46.927 }, 00:09:46.927 "driver_specific": { 00:09:46.927 "lvol": { 00:09:46.927 "lvol_store_uuid": "f708d971-d0d3-4620-8277-8d7f5fa32ed5", 00:09:46.927 "base_bdev": "aio_bdev", 00:09:46.927 "thin_provision": false, 00:09:46.927 "snapshot": false, 00:09:46.927 "clone": false, 00:09:46.927 "esnap_clone": false 00:09:46.927 } 00:09:46.927 } 00:09:46.927 } 00:09:46.927 ] 00:09:47.187 02:54:26 -- common/autotest_common.sh@893 -- # return 0 00:09:47.187 02:54:26 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:47.187 02:54:26 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:47.446 02:54:26 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:47.446 02:54:26 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:47.446 02:54:26 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:47.705 02:54:26 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:47.705 02:54:26 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 1ea2d57b-a537-4a91-9362-d239c955a5bd 00:09:47.965 02:54:26 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f708d971-d0d3-4620-8277-8d7f5fa32ed5 00:09:48.224 02:54:27 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:48.224 02:54:27 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:48.792 ************************************ 00:09:48.792 END TEST lvs_grow_dirty 00:09:48.792 ************************************ 00:09:48.792 00:09:48.792 real 0m19.551s 00:09:48.792 user 0m39.612s 00:09:48.792 sys 0m8.747s 00:09:48.792 02:54:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:48.792 02:54:27 -- common/autotest_common.sh@10 -- # set +x 00:09:48.792 02:54:27 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:48.792 02:54:27 -- common/autotest_common.sh@794 -- # type=--id 00:09:48.792 02:54:27 -- 
common/autotest_common.sh@795 -- # id=0 00:09:48.792 02:54:27 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:09:48.792 02:54:27 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:48.792 02:54:27 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:09:48.792 02:54:27 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:09:48.792 02:54:27 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:09:48.792 02:54:27 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:48.792 nvmf_trace.0 00:09:48.792 02:54:27 -- common/autotest_common.sh@809 -- # return 0 00:09:48.792 02:54:27 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:48.792 02:54:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:48.792 02:54:27 -- nvmf/common.sh@117 -- # sync 00:09:49.051 02:54:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:49.051 02:54:28 -- nvmf/common.sh@120 -- # set +e 00:09:49.051 02:54:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:49.051 02:54:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:49.051 rmmod nvme_tcp 00:09:49.051 rmmod nvme_fabrics 00:09:49.051 rmmod nvme_keyring 00:09:49.051 02:54:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:49.051 02:54:28 -- nvmf/common.sh@124 -- # set -e 00:09:49.051 02:54:28 -- nvmf/common.sh@125 -- # return 0 00:09:49.051 02:54:28 -- nvmf/common.sh@478 -- # '[' -n 79740 ']' 00:09:49.051 02:54:28 -- nvmf/common.sh@479 -- # killprocess 79740 00:09:49.051 02:54:28 -- common/autotest_common.sh@936 -- # '[' -z 79740 ']' 00:09:49.051 02:54:28 -- common/autotest_common.sh@940 -- # kill -0 79740 00:09:49.051 02:54:28 -- common/autotest_common.sh@941 -- # uname 00:09:49.051 02:54:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:49.051 02:54:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79740 00:09:49.051 killing process with pid 79740 00:09:49.051 02:54:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:49.051 02:54:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:49.051 02:54:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79740' 00:09:49.051 02:54:28 -- common/autotest_common.sh@955 -- # kill 79740 00:09:49.051 02:54:28 -- common/autotest_common.sh@960 -- # wait 79740 00:09:49.310 02:54:28 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:49.310 02:54:28 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:49.310 02:54:28 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:49.310 02:54:28 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:49.310 02:54:28 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:49.310 02:54:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.310 02:54:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.310 02:54:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.310 02:54:28 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:49.310 00:09:49.310 real 0m38.266s 00:09:49.310 user 1m1.208s 00:09:49.310 sys 0m11.743s 00:09:49.310 02:54:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:49.310 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.310 ************************************ 00:09:49.310 END TEST nvmf_lvs_grow 00:09:49.310 ************************************ 00:09:49.310 02:54:28 -- nvmf/nvmf.sh@50 -- # run_test 
nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:49.310 02:54:28 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:49.310 02:54:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.310 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.310 ************************************ 00:09:49.310 START TEST nvmf_bdev_io_wait 00:09:49.310 ************************************ 00:09:49.310 02:54:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:49.310 * Looking for test storage... 00:09:49.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:49.570 02:54:28 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:49.570 02:54:28 -- nvmf/common.sh@7 -- # uname -s 00:09:49.570 02:54:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.570 02:54:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.570 02:54:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.570 02:54:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.570 02:54:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.570 02:54:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.570 02:54:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.570 02:54:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.570 02:54:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.570 02:54:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.570 02:54:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:09:49.570 02:54:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:09:49.570 02:54:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.570 02:54:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.570 02:54:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:49.570 02:54:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.570 02:54:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:49.570 02:54:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.570 02:54:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.570 02:54:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.570 02:54:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.570 02:54:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.570 02:54:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.570 02:54:28 -- paths/export.sh@5 -- # export PATH 00:09:49.570 02:54:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.570 02:54:28 -- nvmf/common.sh@47 -- # : 0 00:09:49.570 02:54:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.570 02:54:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.570 02:54:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.570 02:54:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.570 02:54:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.570 02:54:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.570 02:54:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.570 02:54:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.570 02:54:28 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:49.570 02:54:28 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:49.570 02:54:28 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:49.570 02:54:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:49.570 02:54:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.570 02:54:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:49.570 02:54:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:49.570 02:54:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:49.570 02:54:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.570 02:54:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:49.570 02:54:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.570 02:54:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:49.570 02:54:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:49.570 02:54:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:49.570 02:54:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:49.570 02:54:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
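The nvmf_veth_init trace that follows is verbose, so here is the topology it builds, condensed into a sketch assembled from the commands visible in the log (namespace, interface, and address names are copied verbatim; the real helper also brings each link up and first tears down leftovers from a previous run, which is what produces the "Cannot find device" lines):

    # Target runs in its own network namespace; the initiator stays in the root ns.
    ip netns add nvmf_tgt_ns_spdk
    # Three veth pairs: one for the initiator, two for the target's listeners.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # 10.0.0.1 = initiator; 10.0.0.2 and 10.0.0.3 = target-side addresses.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bridge the root-ns peer ends together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT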
00:09:49.570 02:54:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:49.570 02:54:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.570 02:54:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.570 02:54:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:49.570 02:54:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:49.570 02:54:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:49.570 02:54:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:49.570 02:54:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:49.570 02:54:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.570 02:54:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:49.570 02:54:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:49.570 02:54:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:49.570 02:54:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:49.570 02:54:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:49.570 02:54:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:49.570 Cannot find device "nvmf_tgt_br" 00:09:49.570 02:54:28 -- nvmf/common.sh@155 -- # true 00:09:49.570 02:54:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:49.570 Cannot find device "nvmf_tgt_br2" 00:09:49.570 02:54:28 -- nvmf/common.sh@156 -- # true 00:09:49.570 02:54:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:49.570 02:54:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:49.570 Cannot find device "nvmf_tgt_br" 00:09:49.570 02:54:28 -- nvmf/common.sh@158 -- # true 00:09:49.570 02:54:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:49.570 Cannot find device "nvmf_tgt_br2" 00:09:49.570 02:54:28 -- nvmf/common.sh@159 -- # true 00:09:49.570 02:54:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:49.570 02:54:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:49.570 02:54:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:49.570 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.570 02:54:28 -- nvmf/common.sh@162 -- # true 00:09:49.570 02:54:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:49.571 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:49.571 02:54:28 -- nvmf/common.sh@163 -- # true 00:09:49.571 02:54:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:49.571 02:54:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:49.571 02:54:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:49.571 02:54:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:49.571 02:54:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:49.571 02:54:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:49.571 02:54:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:49.571 02:54:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:49.571 02:54:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:49.571 
02:54:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:49.571 02:54:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:49.830 02:54:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:49.830 02:54:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:49.830 02:54:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:49.830 02:54:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:49.830 02:54:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:49.830 02:54:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:49.830 02:54:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:49.830 02:54:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:49.830 02:54:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:49.830 02:54:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:49.830 02:54:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:49.830 02:54:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:49.830 02:54:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:49.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:49.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:09:49.830 00:09:49.830 --- 10.0.0.2 ping statistics --- 00:09:49.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.830 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:09:49.830 02:54:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:49.830 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:49.830 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:49.830 00:09:49.830 --- 10.0.0.3 ping statistics --- 00:09:49.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.830 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:49.830 02:54:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:49.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:49.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:09:49.830 00:09:49.830 --- 10.0.0.1 ping statistics --- 00:09:49.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.830 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:09:49.830 02:54:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.830 02:54:28 -- nvmf/common.sh@422 -- # return 0 00:09:49.830 02:54:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:49.831 02:54:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.831 02:54:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:49.831 02:54:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:49.831 02:54:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.831 02:54:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:49.831 02:54:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:49.831 02:54:28 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:49.831 02:54:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:49.831 02:54:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:49.831 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.831 02:54:28 -- nvmf/common.sh@470 -- # nvmfpid=80051 00:09:49.831 02:54:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:49.831 02:54:28 -- nvmf/common.sh@471 -- # waitforlisten 80051 00:09:49.831 02:54:28 -- common/autotest_common.sh@817 -- # '[' -z 80051 ']' 00:09:49.831 02:54:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.831 02:54:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:49.831 02:54:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.831 02:54:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:49.831 02:54:28 -- common/autotest_common.sh@10 -- # set +x 00:09:49.831 [2024-04-23 02:54:28.905681] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:49.831 [2024-04-23 02:54:28.905773] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.089 [2024-04-23 02:54:29.028627] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:50.089 [2024-04-23 02:54:29.045858] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.089 [2024-04-23 02:54:29.090467] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.089 [2024-04-23 02:54:29.090778] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.089 [2024-04-23 02:54:29.090977] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.089 [2024-04-23 02:54:29.091150] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.089 [2024-04-23 02:54:29.091209] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
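The waitforlisten 80051 call above blocks until the freshly started nvmf_tgt process is alive and answering on /var/tmp/spdk.sock. A minimal stand-in for what such a helper has to do — an illustration only, assuming the real implementation in autotest_common.sh polls in roughly this way:

    # Hypothetical sketch of waitforlisten: succeed once $pid is running
    # and the app's JSON-RPC socket responds.
    wait_for_rpc() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1  # target died during startup
            # rpc_get_methods is a cheap built-in RPC; a reply proves the
            # socket is accepting requests.
            "$rpc" -s "$sock" rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1  # timed out
    }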
00:09:50.089 [2024-04-23 02:54:29.091628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.089 [2024-04-23 02:54:29.091760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.089 [2024-04-23 02:54:29.091892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.090 [2024-04-23 02:54:29.091895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:51.025 02:54:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:51.025 02:54:29 -- common/autotest_common.sh@850 -- # return 0 00:09:51.025 02:54:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:51.025 02:54:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:51.025 02:54:29 -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 02:54:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.025 02:54:29 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:51.025 02:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.025 02:54:29 -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 02:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.025 02:54:29 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:51.025 02:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.025 02:54:29 -- common/autotest_common.sh@10 -- # set +x 00:09:51.025 02:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.025 02:54:30 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:51.026 02:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.026 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.026 [2024-04-23 02:54:30.007705] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.026 02:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:51.026 02:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.026 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.026 Malloc0 00:09:51.026 02:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:51.026 02:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.026 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.026 02:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:51.026 02:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.026 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.026 02:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:51.026 02:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:51.026 02:54:30 -- common/autotest_common.sh@10 -- # set +x 00:09:51.026 [2024-04-23 02:54:30.062174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:51.026 02:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=80091 00:09:51.026 02:54:30 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # config=() 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # local subsystem config 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@30 -- # READ_PID=80093 00:09:51.026 02:54:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:51.026 { 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme$subsystem", 00:09:51.026 "trtype": "$TEST_TRANSPORT", 00:09:51.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "$NVMF_PORT", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.026 "hdgst": ${hdgst:-false}, 00:09:51.026 "ddgst": ${ddgst:-false} 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 } 00:09:51.026 EOF 00:09:51.026 )") 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=80095 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # config=() 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # local subsystem config 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # cat 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=80098 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@35 -- # sync 00:09:51.026 02:54:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:51.026 { 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme$subsystem", 00:09:51.026 "trtype": "$TEST_TRANSPORT", 00:09:51.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "$NVMF_PORT", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.026 "hdgst": ${hdgst:-false}, 00:09:51.026 "ddgst": ${ddgst:-false} 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 } 00:09:51.026 EOF 00:09:51.026 )") 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # cat 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # config=() 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # local subsystem config 00:09:51.026 02:54:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:51.026 { 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme$subsystem", 00:09:51.026 "trtype": "$TEST_TRANSPORT", 00:09:51.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": 
"$NVMF_PORT", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.026 "hdgst": ${hdgst:-false}, 00:09:51.026 "ddgst": ${ddgst:-false} 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 } 00:09:51.026 EOF 00:09:51.026 )") 00:09:51.026 02:54:30 -- nvmf/common.sh@545 -- # jq . 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # config=() 00:09:51.026 02:54:30 -- nvmf/common.sh@521 -- # local subsystem config 00:09:51.026 02:54:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:09:51.026 02:54:30 -- nvmf/common.sh@545 -- # jq . 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:09:51.026 { 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme$subsystem", 00:09:51.026 "trtype": "$TEST_TRANSPORT", 00:09:51.026 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "$NVMF_PORT", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:51.026 "hdgst": ${hdgst:-false}, 00:09:51.026 "ddgst": ${ddgst:-false} 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 } 00:09:51.026 EOF 00:09:51.026 )") 00:09:51.026 02:54:30 -- nvmf/common.sh@546 -- # IFS=, 00:09:51.026 02:54:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme1", 00:09:51.026 "trtype": "tcp", 00:09:51.026 "traddr": "10.0.0.2", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "4420", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.026 "hdgst": false, 00:09:51.026 "ddgst": false 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 }' 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # cat 00:09:51.026 02:54:30 -- nvmf/common.sh@546 -- # IFS=, 00:09:51.026 02:54:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme1", 00:09:51.026 "trtype": "tcp", 00:09:51.026 "traddr": "10.0.0.2", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "4420", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.026 "hdgst": false, 00:09:51.026 "ddgst": false 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 }' 00:09:51.026 02:54:30 -- nvmf/common.sh@543 -- # cat 00:09:51.026 02:54:30 -- nvmf/common.sh@545 -- # jq . 00:09:51.026 02:54:30 -- nvmf/common.sh@546 -- # IFS=, 00:09:51.026 02:54:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme1", 00:09:51.026 "trtype": "tcp", 00:09:51.026 "traddr": "10.0.0.2", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "4420", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.026 "hdgst": false, 00:09:51.026 "ddgst": false 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 }' 00:09:51.026 02:54:30 -- nvmf/common.sh@545 -- # jq . 
00:09:51.026 02:54:30 -- nvmf/common.sh@546 -- # IFS=, 00:09:51.026 02:54:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:09:51.026 "params": { 00:09:51.026 "name": "Nvme1", 00:09:51.026 "trtype": "tcp", 00:09:51.026 "traddr": "10.0.0.2", 00:09:51.026 "adrfam": "ipv4", 00:09:51.026 "trsvcid": "4420", 00:09:51.026 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:51.026 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:51.026 "hdgst": false, 00:09:51.026 "ddgst": false 00:09:51.026 }, 00:09:51.026 "method": "bdev_nvme_attach_controller" 00:09:51.026 }' 00:09:51.026 [2024-04-23 02:54:30.126368] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:51.026 [2024-04-23 02:54:30.126625] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:51.026 [2024-04-23 02:54:30.129270] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:51.026 [2024-04-23 02:54:30.129604] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:51.026 02:54:30 -- target/bdev_io_wait.sh@37 -- # wait 80091 00:09:51.026 [2024-04-23 02:54:30.169667] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:51.026 [2024-04-23 02:54:30.170057] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:51.285 [2024-04-23 02:54:30.189546] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:51.285 [2024-04-23 02:54:30.189655] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:51.286 [2024-04-23 02:54:30.295693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:51.286 [2024-04-23 02:54:30.309524] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.286 [2024-04-23 02:54:30.330234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:51.286 [2024-04-23 02:54:30.337882] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:51.286 [2024-04-23 02:54:30.376641] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:51.286 [2024-04-23 02:54:30.383111] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.286 [2024-04-23 02:54:30.397544] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.286 [2024-04-23 02:54:30.414242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:51.286 [2024-04-23 02:54:30.418860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
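Each of the four bdevperf instances started above receives the same single-controller configuration on /dev/fd/63; stripped of the shell quoting, the fragment the printf lines emit is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

In outline, the launches reduce to four parallel runs that differ only in core mask, instance id, and workload — a sketch, with the flags copied from the invocations above (in reality each instance gets its own /dev/fd substitution of the JSON):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    $bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
    $bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 & READ_PID=$!
    $bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
    $bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
    wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"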
00:09:51.286 [2024-04-23 02:54:30.425238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:51.545 Running I/O for 1 seconds... 00:09:51.545 [2024-04-23 02:54:30.466995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:51.545 [2024-04-23 02:54:30.501523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:51.545 Running I/O for 1 seconds... 00:09:51.545 Running I/O for 1 seconds... 00:09:51.545 Running I/O for 1 seconds... 00:09:52.561 00:09:52.561 Latency(us) 00:09:52.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.561 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:52.561 Nvme1n1 : 1.00 172779.80 674.92 0.00 0.00 738.15 381.67 908.57 00:09:52.561 =================================================================================================================== 00:09:52.561 Total : 172779.80 674.92 0.00 0.00 738.15 381.67 908.57 00:09:52.561 00:09:52.561 Latency(us) 00:09:52.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.561 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:52.561 Nvme1n1 : 1.01 8950.78 34.96 0.00 0.00 14228.02 9353.77 22997.18 00:09:52.561 =================================================================================================================== 00:09:52.561 Total : 8950.78 34.96 0.00 0.00 14228.02 9353.77 22997.18 00:09:52.561 00:09:52.561 Latency(us) 00:09:52.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.561 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:52.561 Nvme1n1 : 1.01 8114.10 31.70 0.00 0.00 15696.52 8877.15 31457.28 00:09:52.561 =================================================================================================================== 00:09:52.561 Total : 8114.10 31.70 0.00 0.00 15696.52 8877.15 31457.28 00:09:52.561 00:09:52.561 Latency(us) 00:09:52.561 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:52.561 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:52.561 Nvme1n1 : 1.01 7655.46 29.90 0.00 0.00 16646.14 7804.74 35508.60 00:09:52.561 =================================================================================================================== 00:09:52.561 Total : 7655.46 29.90 0.00 0.00 16646.14 7804.74 35508.60 00:09:52.561 02:54:31 -- target/bdev_io_wait.sh@38 -- # wait 80093 00:09:52.561 02:54:31 -- target/bdev_io_wait.sh@39 -- # wait 80095 00:09:52.561 02:54:31 -- target/bdev_io_wait.sh@40 -- # wait 80098 00:09:52.830 02:54:31 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:52.830 02:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:52.830 02:54:31 -- common/autotest_common.sh@10 -- # set +x 00:09:52.830 02:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:52.830 02:54:31 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:52.830 02:54:31 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:52.830 02:54:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:09:52.830 02:54:31 -- nvmf/common.sh@117 -- # sync 00:09:52.830 02:54:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:52.830 02:54:31 -- nvmf/common.sh@120 -- # set +e 00:09:52.830 02:54:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:52.830 02:54:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:52.830 rmmod nvme_tcp 00:09:52.830 rmmod nvme_fabrics 
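As a sanity check on the latency tables above, the MiB/s column is just IOPS multiplied by the 4096-byte I/O size: the flush job reports 172779.80 IOPS, and 172779.80 x 4096 = 707,706,061 B/s, which divided by 1,048,576 gives the reported 674.92 MiB/s; likewise the read job's 8950.78 x 4096 / 1,048,576 = 34.96 MiB/s.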
00:09:52.830 rmmod nvme_keyring 00:09:52.830 02:54:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:52.830 02:54:31 -- nvmf/common.sh@124 -- # set -e 00:09:52.830 02:54:31 -- nvmf/common.sh@125 -- # return 0 00:09:52.830 02:54:31 -- nvmf/common.sh@478 -- # '[' -n 80051 ']' 00:09:52.830 02:54:31 -- nvmf/common.sh@479 -- # killprocess 80051 00:09:52.830 02:54:31 -- common/autotest_common.sh@936 -- # '[' -z 80051 ']' 00:09:52.830 02:54:31 -- common/autotest_common.sh@940 -- # kill -0 80051 00:09:52.830 02:54:31 -- common/autotest_common.sh@941 -- # uname 00:09:52.830 02:54:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:52.830 02:54:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80051 00:09:52.830 02:54:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:52.830 02:54:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:52.830 killing process with pid 80051 00:09:52.830 02:54:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80051' 00:09:52.830 02:54:31 -- common/autotest_common.sh@955 -- # kill 80051 00:09:52.830 02:54:31 -- common/autotest_common.sh@960 -- # wait 80051 00:09:53.090 02:54:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:09:53.090 02:54:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:09:53.090 02:54:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:09:53.090 02:54:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:53.090 02:54:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:53.090 02:54:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.090 02:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.090 02:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.090 02:54:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:09:53.090 00:09:53.090 real 0m3.707s 00:09:53.090 user 0m16.123s 00:09:53.090 sys 0m1.977s 00:09:53.090 02:54:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:53.090 ************************************ 00:09:53.090 END TEST nvmf_bdev_io_wait 00:09:53.090 ************************************ 00:09:53.090 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.090 02:54:32 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.090 02:54:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:53.090 02:54:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.090 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.090 ************************************ 00:09:53.090 START TEST nvmf_queue_depth 00:09:53.090 ************************************ 00:09:53.090 02:54:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:53.349 * Looking for test storage... 
00:09:53.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:53.349 02:54:32 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:53.349 02:54:32 -- nvmf/common.sh@7 -- # uname -s 00:09:53.349 02:54:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.349 02:54:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.349 02:54:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.349 02:54:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.349 02:54:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.349 02:54:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.349 02:54:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.349 02:54:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.349 02:54:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.349 02:54:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.349 02:54:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:09:53.349 02:54:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:09:53.349 02:54:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.349 02:54:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.349 02:54:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:53.350 02:54:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.350 02:54:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:53.350 02:54:32 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.350 02:54:32 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.350 02:54:32 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.350 02:54:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.350 02:54:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.350 02:54:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.350 02:54:32 -- paths/export.sh@5 -- # export PATH 00:09:53.350 02:54:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.350 02:54:32 -- nvmf/common.sh@47 -- # : 0 00:09:53.350 02:54:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.350 02:54:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.350 02:54:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.350 02:54:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.350 02:54:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.350 02:54:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.350 02:54:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.350 02:54:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.350 02:54:32 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:53.350 02:54:32 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:53.350 02:54:32 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:53.350 02:54:32 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:53.350 02:54:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:09:53.350 02:54:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.350 02:54:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:09:53.350 02:54:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:09:53.350 02:54:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:09:53.350 02:54:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.350 02:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:53.350 02:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.350 02:54:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:09:53.350 02:54:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:09:53.350 02:54:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:09:53.350 02:54:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:09:53.350 02:54:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:09:53.350 02:54:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:09:53.350 02:54:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:53.350 02:54:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:53.350 02:54:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:53.350 02:54:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:09:53.350 02:54:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:53.350 02:54:32 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:53.350 02:54:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:53.350 02:54:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:53.350 02:54:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:53.350 02:54:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:53.350 02:54:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:53.350 02:54:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:53.350 02:54:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:09:53.350 02:54:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:09:53.350 Cannot find device "nvmf_tgt_br" 00:09:53.350 02:54:32 -- nvmf/common.sh@155 -- # true 00:09:53.350 02:54:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:09:53.350 Cannot find device "nvmf_tgt_br2" 00:09:53.350 02:54:32 -- nvmf/common.sh@156 -- # true 00:09:53.350 02:54:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:09:53.350 02:54:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:09:53.350 Cannot find device "nvmf_tgt_br" 00:09:53.350 02:54:32 -- nvmf/common.sh@158 -- # true 00:09:53.350 02:54:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:09:53.350 Cannot find device "nvmf_tgt_br2" 00:09:53.350 02:54:32 -- nvmf/common.sh@159 -- # true 00:09:53.350 02:54:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:09:53.350 02:54:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:09:53.350 02:54:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:53.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.350 02:54:32 -- nvmf/common.sh@162 -- # true 00:09:53.350 02:54:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:53.350 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:53.350 02:54:32 -- nvmf/common.sh@163 -- # true 00:09:53.350 02:54:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:09:53.350 02:54:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:53.350 02:54:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:53.350 02:54:32 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:53.350 02:54:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:53.350 02:54:32 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:53.609 02:54:32 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:53.609 02:54:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:53.609 02:54:32 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:53.609 02:54:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:09:53.609 02:54:32 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:09:53.609 02:54:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:09:53.609 02:54:32 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:09:53.609 02:54:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:53.609 02:54:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:09:53.609 02:54:32 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:53.609 02:54:32 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:09:53.609 02:54:32 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:09:53.609 02:54:32 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:09:53.609 02:54:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:53.609 02:54:32 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:53.609 02:54:32 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:53.609 02:54:32 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:53.609 02:54:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:09:53.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:53.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:09:53.609 00:09:53.609 --- 10.0.0.2 ping statistics --- 00:09:53.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.610 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:09:53.610 02:54:32 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:09:53.610 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:53.610 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.037 ms 00:09:53.610 00:09:53.610 --- 10.0.0.3 ping statistics --- 00:09:53.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.610 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:09:53.610 02:54:32 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:53.610 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:53.610 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:09:53.610 00:09:53.610 --- 10.0.0.1 ping statistics --- 00:09:53.610 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:53.610 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:09:53.610 02:54:32 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:53.610 02:54:32 -- nvmf/common.sh@422 -- # return 0 00:09:53.610 02:54:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:09:53.610 02:54:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:53.610 02:54:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:09:53.610 02:54:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:09:53.610 02:54:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:53.610 02:54:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:09:53.610 02:54:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:09:53.610 02:54:32 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:53.610 02:54:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:09:53.610 02:54:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:09:53.610 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.610 02:54:32 -- nvmf/common.sh@470 -- # nvmfpid=80304 00:09:53.610 02:54:32 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:53.610 02:54:32 -- nvmf/common.sh@471 -- # waitforlisten 80304 00:09:53.610 02:54:32 -- common/autotest_common.sh@817 -- # '[' -z 80304 ']' 00:09:53.610 02:54:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.610 02:54:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:53.610 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:09:53.610 02:54:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.610 02:54:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:53.610 02:54:32 -- common/autotest_common.sh@10 -- # set +x 00:09:53.610 [2024-04-23 02:54:32.733502] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:53.610 [2024-04-23 02:54:32.733610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.869 [2024-04-23 02:54:32.857711] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:53.869 [2024-04-23 02:54:32.875023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.869 [2024-04-23 02:54:32.914836] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:53.869 [2024-04-23 02:54:32.914936] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:53.869 [2024-04-23 02:54:32.914951] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:53.869 [2024-04-23 02:54:32.914961] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:53.869 [2024-04-23 02:54:32.914970] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:53.869 [2024-04-23 02:54:32.915010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:54.806 02:54:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:54.806 02:54:33 -- common/autotest_common.sh@850 -- # return 0 00:09:54.806 02:54:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:09:54.806 02:54:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:09:54.806 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.806 02:54:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:54.806 02:54:33 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:54.806 02:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.807 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 [2024-04-23 02:54:33.698858] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:54.807 02:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.807 02:54:33 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:54.807 02:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.807 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 Malloc0 00:09:54.807 02:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.807 02:54:33 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:54.807 02:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.807 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 02:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.807 02:54:33 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:54.807 02:54:33 -- common/autotest_common.sh@549 
-- # xtrace_disable 00:09:54.807 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 02:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.807 02:54:33 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:54.807 02:54:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:54.807 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 [2024-04-23 02:54:33.760169] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:54.807 02:54:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:54.807 02:54:33 -- target/queue_depth.sh@30 -- # bdevperf_pid=80338 00:09:54.807 02:54:33 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:54.807 02:54:33 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:54.807 02:54:33 -- target/queue_depth.sh@33 -- # waitforlisten 80338 /var/tmp/bdevperf.sock 00:09:54.807 02:54:33 -- common/autotest_common.sh@817 -- # '[' -z 80338 ']' 00:09:54.807 02:54:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:54.807 02:54:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:09:54.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:54.807 02:54:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:54.807 02:54:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:09:54.807 02:54:33 -- common/autotest_common.sh@10 -- # set +x 00:09:54.807 [2024-04-23 02:54:33.818207] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:09:54.807 [2024-04-23 02:54:33.818330] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80338 ] 00:09:54.807 [2024-04-23 02:54:33.941781] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:09:54.807 [2024-04-23 02:54:33.954044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:55.065 [2024-04-23 02:54:33.991812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.065 02:54:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:09:55.065 02:54:34 -- common/autotest_common.sh@850 -- # return 0 00:09:55.065 02:54:34 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:55.065 02:54:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:09:55.065 02:54:34 -- common/autotest_common.sh@10 -- # set +x 00:09:55.065 NVMe0n1 00:09:55.065 02:54:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:09:55.065 02:54:34 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:55.323 Running I/O for 10 seconds... 
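While the 10-second run executes, it is worth noting that everything queue_depth.sh needed is visible in the trace above: the target got a TCP transport, a 64 MiB malloc bdev exported as a namespace of nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and bdevperf then attached to that subsystem as an initiator at queue depth 1024. Condensed into a standalone sketch (the harness's waitforlisten polling, xtrace plumbing, and netns wrapping are omitted; paths assume this CI's layout under /home/vagrant/spdk_repo/spdk):

  #!/usr/bin/env bash
  # Sketch of the queue-depth flow traced above, not the verbatim script.
  SPDK=/home/vagrant/spdk_repo/spdk
  NQN=nqn.2016-06.io.spdk:cnode1

  # Target side (nvmf_tgt already listening on /var/tmp/spdk.sock):
  # TCP transport with the traced options, one malloc-backed namespace,
  # one listener on the first target address.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b Malloc0
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "$NQN" Malloc0
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf idles in -z mode on its own RPC socket, the
  # NVMe-oF controller is attached through that socket, and perform_tests
  # kicks off the run (4 KiB I/O, queue depth 1024, verify, 10 seconds).
  "$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  sleep 2  # crude stand-in for the harness's waitforlisten on bdevperf.sock
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

The results table that follows is the payoff: sustained IOPS against the malloc-backed namespace at that queue depth.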
00:10:05.299
00:10:05.299 Latency(us)
00:10:05.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:05.300 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:05.300 Verification LBA range: start 0x0 length 0x4000
00:10:05.300 NVMe0n1 : 10.07 9068.18 35.42 0.00 0.00 112427.23 25856.93 95325.09
00:10:05.300 ===================================================================================================================
00:10:05.300 Total : 9068.18 35.42 0.00 0.00 112427.23 25856.93 95325.09
00:10:05.300 0
00:10:05.300 02:54:44 -- target/queue_depth.sh@39 -- # killprocess 80338
00:10:05.300 02:54:44 -- common/autotest_common.sh@936 -- # '[' -z 80338 ']'
00:10:05.300 02:54:44 -- common/autotest_common.sh@940 -- # kill -0 80338
00:10:05.300 02:54:44 -- common/autotest_common.sh@941 -- # uname
00:10:05.300 02:54:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:05.300 02:54:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80338
00:10:05.300 killing process with pid 80338
Received shutdown signal, test time was about 10.000000 seconds
00:10:05.300
00:10:05.300 Latency(us)
00:10:05.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:05.300 ===================================================================================================================
00:10:05.300 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:05.300 02:54:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:10:05.300 02:54:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:10:05.300 02:54:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80338'
00:10:05.300 02:54:44 -- common/autotest_common.sh@955 -- # kill 80338
00:10:05.300 02:54:44 -- common/autotest_common.sh@960 -- # wait 80338
00:10:05.558 02:54:44 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:10:05.558 02:54:44 -- target/queue_depth.sh@43 -- # nvmftestfini
00:10:05.558 02:54:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:10:05.558 02:54:44 -- nvmf/common.sh@117 -- # sync
00:10:05.558 02:54:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:05.558 02:54:44 -- nvmf/common.sh@120 -- # set +e
00:10:05.558 02:54:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:05.558 02:54:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:05.558 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:10:05.558 02:54:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:05.558 02:54:44 -- nvmf/common.sh@124 -- # set -e
00:10:05.558 02:54:44 -- nvmf/common.sh@125 -- # return 0
00:10:05.558 02:54:44 -- nvmf/common.sh@478 -- # '[' -n 80304 ']'
00:10:05.558 02:54:44 -- nvmf/common.sh@479 -- # killprocess 80304
00:10:05.558 02:54:44 -- common/autotest_common.sh@936 -- # '[' -z 80304 ']'
00:10:05.558 02:54:44 -- common/autotest_common.sh@940 -- # kill -0 80304
00:10:05.558 02:54:44 -- common/autotest_common.sh@941 -- # uname
00:10:05.558 02:54:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:10:05.558 02:54:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80304
00:10:05.558 killing process with pid 80304
00:10:05.558 02:54:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:10:05.558 02:54:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:10:05.558 02:54:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80304'
00:10:05.558 02:54:44 --
common/autotest_common.sh@955 -- # kill 80304 00:10:05.558 02:54:44 -- common/autotest_common.sh@960 -- # wait 80304 00:10:05.817 02:54:44 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:05.817 02:54:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:05.817 02:54:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:05.817 02:54:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.817 02:54:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.817 02:54:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.817 02:54:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:05.817 02:54:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:05.817 02:54:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:05.817 00:10:05.817 real 0m12.661s 00:10:05.817 user 0m21.786s 00:10:05.817 sys 0m1.942s 00:10:05.817 02:54:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:05.817 ************************************ 00:10:05.817 END TEST nvmf_queue_depth 00:10:05.817 02:54:44 -- common/autotest_common.sh@10 -- # set +x 00:10:05.817 ************************************ 00:10:05.817 02:54:44 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:05.817 02:54:44 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:05.817 02:54:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:05.817 02:54:44 -- common/autotest_common.sh@10 -- # set +x 00:10:06.075 ************************************ 00:10:06.075 START TEST nvmf_multipath 00:10:06.075 ************************************ 00:10:06.075 02:54:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.075 * Looking for test storage... 
00:10:06.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:06.075 02:54:45 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:06.075 02:54:45 -- nvmf/common.sh@7 -- # uname -s 00:10:06.075 02:54:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.075 02:54:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.075 02:54:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.075 02:54:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.075 02:54:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.075 02:54:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.075 02:54:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.075 02:54:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.075 02:54:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.075 02:54:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.075 02:54:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:06.075 02:54:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:06.075 02:54:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.075 02:54:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.075 02:54:45 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:06.076 02:54:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.076 02:54:45 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:06.076 02:54:45 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.076 02:54:45 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.076 02:54:45 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.076 02:54:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.076 02:54:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.076 02:54:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.076 02:54:45 -- paths/export.sh@5 -- # export PATH 00:10:06.076 02:54:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.076 02:54:45 -- nvmf/common.sh@47 -- # : 0 00:10:06.076 02:54:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.076 02:54:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.076 02:54:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.076 02:54:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.076 02:54:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.076 02:54:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.076 02:54:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.076 02:54:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.076 02:54:45 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.076 02:54:45 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.076 02:54:45 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:06.076 02:54:45 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:06.076 02:54:45 -- target/multipath.sh@43 -- # nvmftestinit 00:10:06.076 02:54:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:06.076 02:54:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.076 02:54:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:06.076 02:54:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:06.076 02:54:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:06.076 02:54:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.076 02:54:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:06.076 02:54:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.076 02:54:45 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:06.076 02:54:45 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:06.076 02:54:45 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:06.076 02:54:45 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:06.076 02:54:45 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:06.076 02:54:45 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:06.076 02:54:45 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:06.076 02:54:45 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:06.076 02:54:45 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:06.076 02:54:45 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:06.076 02:54:45 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:06.076 02:54:45 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:06.076 02:54:45 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:06.076 02:54:45 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:06.076 02:54:45 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:06.076 02:54:45 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:06.076 02:54:45 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:06.076 02:54:45 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:06.076 02:54:45 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:06.076 02:54:45 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:06.076 Cannot find device "nvmf_tgt_br" 00:10:06.076 02:54:45 -- nvmf/common.sh@155 -- # true 00:10:06.076 02:54:45 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:06.076 Cannot find device "nvmf_tgt_br2" 00:10:06.076 02:54:45 -- nvmf/common.sh@156 -- # true 00:10:06.076 02:54:45 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:06.076 02:54:45 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:06.076 Cannot find device "nvmf_tgt_br" 00:10:06.076 02:54:45 -- nvmf/common.sh@158 -- # true 00:10:06.076 02:54:45 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:06.076 Cannot find device "nvmf_tgt_br2" 00:10:06.076 02:54:45 -- nvmf/common.sh@159 -- # true 00:10:06.076 02:54:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:06.076 02:54:45 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:06.334 02:54:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:06.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.334 02:54:45 -- nvmf/common.sh@162 -- # true 00:10:06.334 02:54:45 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:06.334 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:06.334 02:54:45 -- nvmf/common.sh@163 -- # true 00:10:06.334 02:54:45 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:06.334 02:54:45 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:06.334 02:54:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:06.334 02:54:45 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:06.334 02:54:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:06.334 02:54:45 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:06.334 02:54:45 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:06.334 02:54:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:06.334 02:54:45 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:06.334 02:54:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:06.334 02:54:45 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:06.334 02:54:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:06.334 02:54:45 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:06.335 02:54:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:10:06.335 02:54:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:06.335 02:54:45 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:06.335 02:54:45 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:06.335 02:54:45 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:06.335 02:54:45 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:06.335 02:54:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:06.335 02:54:45 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:06.335 02:54:45 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:06.335 02:54:45 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:06.335 02:54:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:06.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:06.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:10:06.335 00:10:06.335 --- 10.0.0.2 ping statistics --- 00:10:06.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.335 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:10:06.335 02:54:45 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:06.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:06.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:06.335 00:10:06.335 --- 10.0.0.3 ping statistics --- 00:10:06.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.335 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:06.335 02:54:45 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:06.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:06.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:10:06.335 00:10:06.335 --- 10.0.0.1 ping statistics --- 00:10:06.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:06.335 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:06.335 02:54:45 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:06.335 02:54:45 -- nvmf/common.sh@422 -- # return 0 00:10:06.335 02:54:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:06.335 02:54:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:06.335 02:54:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:06.335 02:54:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:06.335 02:54:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:06.335 02:54:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:06.335 02:54:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:06.593 02:54:45 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:06.593 02:54:45 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:06.593 02:54:45 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:06.593 02:54:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:06.593 02:54:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:06.593 02:54:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.593 02:54:45 -- nvmf/common.sh@470 -- # nvmfpid=80652 00:10:06.593 02:54:45 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:06.593 02:54:45 -- nvmf/common.sh@471 -- # waitforlisten 80652 00:10:06.593 02:54:45 -- common/autotest_common.sh@817 -- # '[' -z 80652 ']' 00:10:06.593 02:54:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:06.593 02:54:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:06.593 02:54:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:06.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:06.593 02:54:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:06.593 02:54:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.593 [2024-04-23 02:54:45.573919] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:10:06.593 [2024-04-23 02:54:45.574054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:06.593 [2024-04-23 02:54:45.701109] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:06.593 [2024-04-23 02:54:45.718298] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:06.851 [2024-04-23 02:54:45.761578] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:06.851 [2024-04-23 02:54:45.761650] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:06.851 [2024-04-23 02:54:45.761665] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:06.851 [2024-04-23 02:54:45.761675] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:06.851 [2024-04-23 02:54:45.761684] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:06.851 [2024-04-23 02:54:45.761807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:06.852 [2024-04-23 02:54:45.762213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:06.852 [2024-04-23 02:54:45.762563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:06.852 [2024-04-23 02:54:45.762651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.852 02:54:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:06.852 02:54:45 -- common/autotest_common.sh@850 -- # return 0 00:10:06.852 02:54:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:06.852 02:54:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:06.852 02:54:45 -- common/autotest_common.sh@10 -- # set +x 00:10:06.852 02:54:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:06.852 02:54:45 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:07.110 [2024-04-23 02:54:46.136743] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:07.110 02:54:46 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:07.369 Malloc0 00:10:07.369 02:54:46 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:07.627 02:54:46 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.194 02:54:47 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.194 [2024-04-23 02:54:47.312283] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.194 02:54:47 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:08.453 [2024-04-23 02:54:47.584256] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:08.453 02:54:47 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:08.719 02:54:47 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:08.719 02:54:47 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:08.719 02:54:47 -- common/autotest_common.sh@1184 -- # local i=0 00:10:08.719 02:54:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:08.719 02:54:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:08.719 02:54:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:11.264 02:54:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:11.264 02:54:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:11.264 02:54:49 -- common/autotest_common.sh@1193 -- # grep -c 
SPDKISFASTANDAWESOME
00:10:11.264 02:54:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:10:11.264 02:54:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:10:11.264 02:54:49 -- common/autotest_common.sh@1194 -- # return 0
00:10:11.264 02:54:49 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME
00:10:11.264 02:54:49 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s
00:10:11.264 02:54:49 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/*
00:10:11.264 02:54:49 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]]
00:10:11.264 02:54:49 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]]
00:10:11.264 02:54:49 -- target/multipath.sh@38 -- # echo nvme-subsys0
00:10:11.264 02:54:49 -- target/multipath.sh@38 -- # return 0
00:10:11.264 02:54:49 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0
00:10:11.264 02:54:49 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*)
00:10:11.264 02:54:49 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}")
00:10:11.264 02:54:49 -- target/multipath.sh@76 -- # (( 2 == 2 ))
00:10:11.264 02:54:49 -- target/multipath.sh@78 -- # p0=nvme0c0n1
00:10:11.264 02:54:49 -- target/multipath.sh@79 -- # p1=nvme0c1n1
00:10:11.264 02:54:49 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized
00:10:11.265 02:54:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized
00:10:11.265 02:54:49 -- target/multipath.sh@22 -- # local timeout=20
00:10:11.265 02:54:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:11.265 02:54:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:11.265 02:54:49 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
00:10:11.265 02:54:49 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized
00:10:11.265 02:54:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized
00:10:11.265 02:54:49 -- target/multipath.sh@22 -- # local timeout=20
00:10:11.265 02:54:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:11.265 02:54:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:11.265 02:54:49 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]]
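Both paths, nvme0c0n1 and nvme0c1n1, report optimized at this point, so the test can start fio and begin flipping listener ANA states to force failover, which is what the rpc calls traced below do. The check_ana_state helper traced above simply polls sysfs until the kernel's view of a path converges on the state the target just advertised. A rough sketch of the helper and of the first failover step that follows (the helper's retry cadence is not visible in this trace, so the sleep interval below is an assumption):

  # Approximation of check_ana_state from test/nvmf/target/multipath.sh:
  # wait, with a bounded number of retries, until /sys/block/<path>/ana_state
  # exists and reports the expected ANA state.
  check_ana_state() {
      local path=$1 ana_state=$2 timeout=20
      local ana_state_f=/sys/block/$path/ana_state
      until [[ -e $ana_state_f && $(<"$ana_state_f") == "$ana_state" ]]; do
          (( timeout-- > 0 )) || return 1
          sleep 0.1  # assumed cadence; the real interval is not in the log
      done
  }

  # First failover step from the trace: the 10.0.0.2 listener is made
  # inaccessible, 10.0.0.3 is promoted, then the kernel view must converge.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
  $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
  check_ana_state nvme0c0n1 inaccessible
  check_ana_state nvme0c1n1 non-optimized

Note the spelling mismatch the test has to bridge: the RPC side takes non_optimized while sysfs reports non-optimized.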
00:10:11.265 02:54:49 -- target/multipath.sh@85 -- # echo numa
00:10:11.265 02:54:49 -- target/multipath.sh@88 -- # fio_pid=80741
00:10:11.265 02:54:49 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v
00:10:11.265 02:54:49 -- target/multipath.sh@90 -- # sleep 1
00:10:11.265 [global]
00:10:11.265 thread=1
00:10:11.265 invalidate=1
00:10:11.265 rw=randrw
00:10:11.265 time_based=1
00:10:11.265 runtime=6
00:10:11.265 ioengine=libaio
00:10:11.265 direct=1
00:10:11.265 bs=4096
00:10:11.265 iodepth=128
00:10:11.265 norandommap=0
00:10:11.265 numjobs=1
00:10:11.265
00:10:11.265 verify_dump=1
00:10:11.265 verify_backlog=512
00:10:11.265 verify_state_save=0
00:10:11.265 do_verify=1
00:10:11.265 verify=crc32c-intel
00:10:11.265 [job0]
00:10:11.265 filename=/dev/nvme0n1
00:10:11.265 Could not set queue depth (nvme0n1)
00:10:11.265 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:10:11.265 fio-3.35
00:10:11.265 Starting 1 thread
00:10:11.832 02:54:50 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:10:12.091 02:54:51 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
00:10:12.349 02:54:51 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible
00:10:12.349 02:54:51 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible
00:10:12.349 02:54:51 -- target/multipath.sh@22 -- # local timeout=20
00:10:12.349 02:54:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:12.349 02:54:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]]
00:10:12.349 02:54:51 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]]
00:10:12.349 02:54:51 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized
00:10:12.349 02:54:51 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized
00:10:12.349 02:54:51 -- target/multipath.sh@22 -- # local timeout=20
00:10:12.349 02:54:51 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state
00:10:12.349 02:54:51 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]]
00:10:12.349 02:54:51 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]]
00:10:12.349 02:54:51 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:10:12.608 02:54:51 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible
00:10:13.176 02:54:52 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized
00:10:13.176 02:54:52 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized
00:10:13.176 02:54:52 -- target/multipath.sh@22 -- # local timeout=20
00:10:13.176 02:54:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state
00:10:13.176 02:54:52 -- target/multipath.sh@25 -- # [[ !
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:13.176 02:54:52 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:13.176 02:54:52 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:13.176 02:54:52 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:13.176 02:54:52 -- target/multipath.sh@22 -- # local timeout=20 00:10:13.176 02:54:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:13.176 02:54:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:13.176 02:54:52 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:13.176 02:54:52 -- target/multipath.sh@104 -- # wait 80741 00:10:17.364 00:10:17.364 job0: (groupid=0, jobs=1): err= 0: pid=80762: Tue Apr 23 02:54:56 2024 00:10:17.364 read: IOPS=9879, BW=38.6MiB/s (40.5MB/s)(232MiB/6006msec) 00:10:17.364 slat (usec): min=4, max=7448, avg=59.25, stdev=234.60 00:10:17.364 clat (usec): min=1627, max=18099, avg=8865.70, stdev=1688.92 00:10:17.364 lat (usec): min=1674, max=18134, avg=8924.95, stdev=1694.75 00:10:17.364 clat percentiles (usec): 00:10:17.364 | 1.00th=[ 4555], 5.00th=[ 6652], 10.00th=[ 7439], 20.00th=[ 7963], 00:10:17.364 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:10:17.364 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[11076], 95.00th=[12649], 00:10:17.364 | 99.00th=[14091], 99.50th=[14615], 99.90th=[16909], 99.95th=[17171], 00:10:17.364 | 99.99th=[17433] 00:10:17.364 bw ( KiB/s): min= 4936, max=26664, per=50.56%, avg=19981.82, stdev=6614.43, samples=11 00:10:17.364 iops : min= 1234, max= 6666, avg=4995.45, stdev=1653.61, samples=11 00:10:17.364 write: IOPS=5627, BW=22.0MiB/s (23.0MB/s)(119MiB/5405msec); 0 zone resets 00:10:17.364 slat (usec): min=7, max=7559, avg=70.01, stdev=169.64 00:10:17.364 clat (usec): min=1708, max=16856, avg=7750.93, stdev=1543.99 00:10:17.364 lat (usec): min=1746, max=16883, avg=7820.95, stdev=1551.40 00:10:17.364 clat percentiles (usec): 00:10:17.364 | 1.00th=[ 3490], 5.00th=[ 4490], 10.00th=[ 5866], 20.00th=[ 7046], 00:10:17.364 | 30.00th=[ 7373], 40.00th=[ 7635], 50.00th=[ 7832], 60.00th=[ 8029], 00:10:17.364 | 70.00th=[ 8291], 80.00th=[ 8586], 90.00th=[ 9241], 95.00th=[10159], 00:10:17.364 | 99.00th=[12125], 99.50th=[12911], 99.90th=[15664], 99.95th=[15926], 00:10:17.364 | 99.99th=[16450] 00:10:17.364 bw ( KiB/s): min= 5208, max=26304, per=89.03%, avg=20041.45, stdev=6337.09, samples=11 00:10:17.364 iops : min= 1302, max= 6576, avg=5010.36, stdev=1584.27, samples=11 00:10:17.364 lat (msec) : 2=0.03%, 4=1.20%, 10=87.19%, 20=11.58% 00:10:17.364 cpu : usr=6.08%, sys=23.33%, ctx=5182, majf=0, minf=108 00:10:17.364 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:17.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:17.364 issued rwts: total=59335,30416,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:17.365 00:10:17.365 Run status group 0 (all jobs): 00:10:17.365 READ: bw=38.6MiB/s (40.5MB/s), 38.6MiB/s-38.6MiB/s (40.5MB/s-40.5MB/s), io=232MiB (243MB), run=6006-6006msec 00:10:17.365 WRITE: bw=22.0MiB/s (23.0MB/s), 22.0MiB/s-22.0MiB/s (23.0MB/s-23.0MB/s), io=119MiB (125MB), run=5405-5405msec 00:10:17.365 00:10:17.365 Disk stats (read/write): 00:10:17.365 nvme0n1: ios=58686/29579, merge=0/0, 
ticks=499005/214863, in_queue=713868, util=98.67% 00:10:17.365 02:54:56 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:17.365 02:54:56 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:17.624 02:54:56 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:17.624 02:54:56 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:17.624 02:54:56 -- target/multipath.sh@22 -- # local timeout=20 00:10:17.624 02:54:56 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:17.624 02:54:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:17.624 02:54:56 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:17.624 02:54:56 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:17.624 02:54:56 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:17.624 02:54:56 -- target/multipath.sh@22 -- # local timeout=20 00:10:17.624 02:54:56 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:17.624 02:54:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:17.624 02:54:56 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:17.624 02:54:56 -- target/multipath.sh@113 -- # echo round-robin 00:10:17.624 02:54:56 -- target/multipath.sh@116 -- # fio_pid=80836 00:10:17.624 02:54:56 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:17.624 02:54:56 -- target/multipath.sh@118 -- # sleep 1 00:10:17.624 [global] 00:10:17.624 thread=1 00:10:17.624 invalidate=1 00:10:17.624 rw=randrw 00:10:17.624 time_based=1 00:10:17.624 runtime=6 00:10:17.624 ioengine=libaio 00:10:17.624 direct=1 00:10:17.624 bs=4096 00:10:17.624 iodepth=128 00:10:17.624 norandommap=0 00:10:17.624 numjobs=1 00:10:17.624 00:10:17.624 verify_dump=1 00:10:17.624 verify_backlog=512 00:10:17.624 verify_state_save=0 00:10:17.624 do_verify=1 00:10:17.624 verify=crc32c-intel 00:10:17.624 [job0] 00:10:17.624 filename=/dev/nvme0n1 00:10:17.624 Could not set queue depth (nvme0n1) 00:10:17.882 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:17.882 fio-3.35 00:10:17.882 Starting 1 thread 00:10:18.825 02:54:57 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:19.134 02:54:57 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:19.134 02:54:58 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:19.134 02:54:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:19.134 02:54:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:19.134 02:54:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:19.134 02:54:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:19.134 02:54:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:19.134 02:54:58 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:19.134 02:54:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:19.134 02:54:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:19.134 02:54:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:19.134 02:54:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:19.134 02:54:58 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:19.134 02:54:58 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:19.403 02:54:58 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:19.662 02:54:58 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:19.662 02:54:58 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:19.662 02:54:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:19.662 02:54:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:19.662 02:54:58 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:19.662 02:54:58 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:19.662 02:54:58 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:19.662 02:54:58 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:19.662 02:54:58 -- target/multipath.sh@22 -- # local timeout=20 00:10:19.662 02:54:58 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:19.662 02:54:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:19.662 02:54:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:19.662 02:54:58 -- target/multipath.sh@132 -- # wait 80836 00:10:24.931 00:10:24.931 job0: (groupid=0, jobs=1): err= 0: pid=80859: Tue Apr 23 02:55:03 2024 00:10:24.931 read: IOPS=11.1k, BW=43.2MiB/s (45.3MB/s)(259MiB/6002msec) 00:10:24.931 slat (usec): min=5, max=5661, avg=46.15, stdev=194.22 00:10:24.931 clat (usec): min=347, max=15706, avg=7893.16, stdev=1915.48 00:10:24.931 lat (usec): min=361, max=15718, avg=7939.31, stdev=1930.14 00:10:24.931 clat percentiles (usec): 00:10:24.931 | 1.00th=[ 2999], 5.00th=[ 4555], 10.00th=[ 5276], 20.00th=[ 6259], 00:10:24.931 | 30.00th=[ 7439], 40.00th=[ 7963], 50.00th=[ 8225], 60.00th=[ 8455], 00:10:24.931 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11469], 00:10:24.931 | 99.00th=[13042], 99.50th=[13304], 99.90th=[13829], 99.95th=[14091], 00:10:24.931 | 99.99th=[14877] 00:10:24.931 bw ( KiB/s): min= 7744, max=39344, per=53.22%, avg=23551.27, stdev=9164.20, samples=11 00:10:24.931 iops : min= 1936, max= 9836, avg=5887.82, stdev=2291.05, samples=11 00:10:24.931 write: IOPS=6648, BW=26.0MiB/s (27.2MB/s)(139MiB/5351msec); 0 zone resets 00:10:24.931 slat (usec): min=15, max=1617, avg=56.37, stdev=136.32 00:10:24.931 clat (usec): min=323, max=14663, avg=6681.27, stdev=1794.15 00:10:24.931 lat (usec): min=392, max=14689, avg=6737.64, stdev=1808.69 00:10:24.931 clat percentiles (usec): 00:10:24.931 | 1.00th=[ 2802], 5.00th=[ 3523], 10.00th=[ 4015], 20.00th=[ 4686], 00:10:24.931 | 30.00th=[ 5538], 40.00th=[ 6980], 50.00th=[ 7373], 60.00th=[ 7635], 00:10:24.931 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:24.931 | 99.00th=[10552], 99.50th=[11600], 99.90th=[12649], 99.95th=[13042], 00:10:24.931 | 99.99th=[13829] 00:10:24.931 bw ( KiB/s): min= 8192, max=38552, per=88.61%, avg=23565.82, stdev=8952.52, samples=11 00:10:24.931 iops : min= 2048, max= 9638, avg=5891.45, stdev=2238.13, samples=11 00:10:24.931 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.03% 00:10:24.931 lat (msec) : 2=0.26%, 4=5.02%, 10=89.97%, 20=4.68% 00:10:24.931 cpu : usr=6.88%, sys=22.96%, ctx=5794, majf=0, minf=108 00:10:24.931 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:24.931 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.931 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.931 issued rwts: total=66399,35575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.931 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.931 00:10:24.931 Run status group 0 (all jobs): 00:10:24.931 READ: bw=43.2MiB/s (45.3MB/s), 43.2MiB/s-43.2MiB/s (45.3MB/s-45.3MB/s), io=259MiB (272MB), run=6002-6002msec 00:10:24.931 WRITE: bw=26.0MiB/s (27.2MB/s), 26.0MiB/s-26.0MiB/s (27.2MB/s-27.2MB/s), io=139MiB (146MB), run=5351-5351msec 00:10:24.931 00:10:24.931 Disk stats (read/write): 00:10:24.931 nvme0n1: ios=65657/34924, merge=0/0, ticks=495070/217038, in_queue=712108, util=98.70% 00:10:24.931 02:55:03 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:24.931 02:55:03 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.931 02:55:03 -- common/autotest_common.sh@1205 -- # local i=0 00:10:24.931 02:55:03 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:24.931 02:55:03 -- 
common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.931 02:55:03 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:24.931 02:55:03 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.931 02:55:03 -- common/autotest_common.sh@1217 -- # return 0 00:10:24.931 02:55:03 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.931 02:55:03 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:24.931 02:55:03 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:24.931 02:55:03 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:24.931 02:55:03 -- target/multipath.sh@144 -- # nvmftestfini 00:10:24.931 02:55:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:24.931 02:55:03 -- nvmf/common.sh@117 -- # sync 00:10:24.931 02:55:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:24.931 02:55:03 -- nvmf/common.sh@120 -- # set +e 00:10:24.931 02:55:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:24.931 02:55:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:24.931 rmmod nvme_tcp 00:10:24.931 rmmod nvme_fabrics 00:10:24.931 rmmod nvme_keyring 00:10:24.931 02:55:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:24.931 02:55:03 -- nvmf/common.sh@124 -- # set -e 00:10:24.931 02:55:03 -- nvmf/common.sh@125 -- # return 0 00:10:24.931 02:55:03 -- nvmf/common.sh@478 -- # '[' -n 80652 ']' 00:10:24.931 02:55:03 -- nvmf/common.sh@479 -- # killprocess 80652 00:10:24.931 02:55:03 -- common/autotest_common.sh@936 -- # '[' -z 80652 ']' 00:10:24.931 02:55:03 -- common/autotest_common.sh@940 -- # kill -0 80652 00:10:24.931 02:55:03 -- common/autotest_common.sh@941 -- # uname 00:10:24.931 02:55:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:24.931 02:55:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80652 00:10:24.931 02:55:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:24.931 02:55:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:24.931 killing process with pid 80652 00:10:24.931 02:55:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80652' 00:10:24.931 02:55:03 -- common/autotest_common.sh@955 -- # kill 80652 00:10:24.931 02:55:03 -- common/autotest_common.sh@960 -- # wait 80652 00:10:24.931 02:55:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:24.931 02:55:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:24.931 02:55:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:24.931 02:55:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.931 02:55:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.931 02:55:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.931 02:55:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.931 02:55:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.931 02:55:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:24.931 00:10:24.931 real 0m18.682s 00:10:24.931 user 1m9.949s 00:10:24.931 sys 0m9.842s 00:10:24.931 02:55:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:24.931 02:55:03 -- common/autotest_common.sh@10 -- # set +x 00:10:24.931 ************************************ 00:10:24.931 END TEST nvmf_multipath 00:10:24.931 ************************************ 00:10:24.931 02:55:03 -- nvmf/nvmf.sh@53 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:24.931 02:55:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:24.931 02:55:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.931 02:55:03 -- common/autotest_common.sh@10 -- # set +x 00:10:24.931 ************************************ 00:10:24.931 START TEST nvmf_zcopy 00:10:24.931 ************************************ 00:10:24.931 02:55:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:24.931 * Looking for test storage... 00:10:24.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:24.931 02:55:03 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:24.931 02:55:03 -- nvmf/common.sh@7 -- # uname -s 00:10:24.932 02:55:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.932 02:55:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.932 02:55:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.932 02:55:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.932 02:55:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.932 02:55:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.932 02:55:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.932 02:55:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.932 02:55:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.932 02:55:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.932 02:55:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:24.932 02:55:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:24.932 02:55:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.932 02:55:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.932 02:55:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:24.932 02:55:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.932 02:55:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:24.932 02:55:03 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.932 02:55:03 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.932 02:55:03 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.932 02:55:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.932 02:55:03 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.932 02:55:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.932 02:55:03 -- paths/export.sh@5 -- # export PATH 00:10:24.932 02:55:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.932 02:55:03 -- nvmf/common.sh@47 -- # : 0 00:10:24.932 02:55:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.932 02:55:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.932 02:55:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.932 02:55:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.932 02:55:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.932 02:55:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.932 02:55:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.932 02:55:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.932 02:55:03 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:24.932 02:55:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:24.932 02:55:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.932 02:55:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:24.932 02:55:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:24.932 02:55:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:24.932 02:55:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.932 02:55:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.932 02:55:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.932 02:55:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:24.932 02:55:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:24.932 02:55:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:24.932 02:55:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:24.932 02:55:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:24.932 02:55:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:24.932 02:55:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.932 02:55:03 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.932 02:55:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:24.932 02:55:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:24.932 02:55:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:24.932 02:55:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:24.932 02:55:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:24.932 02:55:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.932 02:55:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:24.932 02:55:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:24.932 02:55:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:24.932 02:55:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:24.932 02:55:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:24.932 02:55:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:24.932 Cannot find device "nvmf_tgt_br" 00:10:24.932 02:55:03 -- nvmf/common.sh@155 -- # true 00:10:24.932 02:55:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:24.932 Cannot find device "nvmf_tgt_br2" 00:10:24.932 02:55:03 -- nvmf/common.sh@156 -- # true 00:10:24.932 02:55:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:24.932 02:55:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:24.932 Cannot find device "nvmf_tgt_br" 00:10:24.932 02:55:03 -- nvmf/common.sh@158 -- # true 00:10:24.932 02:55:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:24.932 Cannot find device "nvmf_tgt_br2" 00:10:24.932 02:55:03 -- nvmf/common.sh@159 -- # true 00:10:24.932 02:55:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:24.932 02:55:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:24.932 02:55:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:24.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.932 02:55:04 -- nvmf/common.sh@162 -- # true 00:10:24.932 02:55:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:24.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:24.932 02:55:04 -- nvmf/common.sh@163 -- # true 00:10:24.932 02:55:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:24.932 02:55:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:24.932 02:55:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:24.932 02:55:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:24.932 02:55:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:24.932 02:55:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:24.932 02:55:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:24.932 02:55:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:24.932 02:55:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:25.191 02:55:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:25.191 02:55:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:25.191 02:55:04 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:25.191 02:55:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:25.191 02:55:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:25.191 02:55:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:25.191 02:55:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:25.191 02:55:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:25.191 02:55:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:25.191 02:55:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:25.191 02:55:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:25.191 02:55:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:25.191 02:55:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:25.191 02:55:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:25.191 02:55:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:25.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:10:25.191 00:10:25.191 --- 10.0.0.2 ping statistics --- 00:10:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.191 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:10:25.191 02:55:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:25.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:25.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:10:25.191 00:10:25.191 --- 10.0.0.3 ping statistics --- 00:10:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.191 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:25.191 02:55:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:25.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:25.191 00:10:25.191 --- 10.0.0.1 ping statistics --- 00:10:25.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.191 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:25.191 02:55:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.191 02:55:04 -- nvmf/common.sh@422 -- # return 0 00:10:25.191 02:55:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:25.191 02:55:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.191 02:55:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:25.191 02:55:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:25.191 02:55:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.191 02:55:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:25.191 02:55:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:25.191 02:55:04 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:25.191 02:55:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:25.191 02:55:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:25.191 02:55:04 -- common/autotest_common.sh@10 -- # set +x 00:10:25.191 02:55:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:25.191 02:55:04 -- nvmf/common.sh@470 -- # nvmfpid=81118 00:10:25.191 02:55:04 -- nvmf/common.sh@471 -- # waitforlisten 81118 00:10:25.191 02:55:04 -- common/autotest_common.sh@817 -- # '[' -z 81118 ']' 00:10:25.191 02:55:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.191 02:55:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:25.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.191 02:55:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.191 02:55:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:25.191 02:55:04 -- common/autotest_common.sh@10 -- # set +x 00:10:25.191 [2024-04-23 02:55:04.280294] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:10:25.191 [2024-04-23 02:55:04.280394] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.449 [2024-04-23 02:55:04.402350] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:25.449 [2024-04-23 02:55:04.421774] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.449 [2024-04-23 02:55:04.462220] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.449 [2024-04-23 02:55:04.462270] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:25.449 [2024-04-23 02:55:04.462284] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.449 [2024-04-23 02:55:04.462297] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.449 [2024-04-23 02:55:04.462306] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
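[Annotation] The trace up to this point is nvmf_veth_init from test/nvmf/common.sh assembling the TCP test topology: stale devices from a previous run are torn down (the "Cannot find device" and "Cannot open network namespace" messages are the tolerated failures of that cleanup pass), a fresh namespace is created for the target, veth pairs are wired into it, the host-side peers are enslaved to a bridge, TCP port 4420 is opened through iptables, and one ping per address proves reachability before nvmf_tgt is launched inside the namespace. A condensed sketch of the equivalent commands, with names and addresses taken from the trace (run as root; the second target pair nvmf_tgt_if2/nvmf_tgt_br2 at 10.0.0.3 is set up the same way and omitted here):

    # namespace for the target plus veth pairs for the initiator and target sides
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    # the initiator keeps 10.0.0.1, the namespaced target end gets 10.0.0.2
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    # bring the links up and bridge the host-side peers together
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    # admit NVMe/TCP traffic and intra-bridge forwarding, then verify
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target, as in the trace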
00:10:25.449 [2024-04-23 02:55:04.462345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:26.016 02:55:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:26.016 02:55:05 -- common/autotest_common.sh@850 -- # return 0 00:10:26.016 02:55:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:26.016 02:55:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:26.016 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 02:55:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.275 02:55:05 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:26.275 02:55:05 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:26.275 02:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.275 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 [2024-04-23 02:55:05.191839] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.275 02:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.275 02:55:05 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:26.275 02:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.275 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 02:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.275 02:55:05 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.275 02:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.275 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 [2024-04-23 02:55:05.207929] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.275 02:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.275 02:55:05 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.275 02:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.275 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 02:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.275 02:55:05 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:26.275 02:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.275 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 malloc0 00:10:26.275 02:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.275 02:55:05 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:26.275 02:55:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:26.275 02:55:05 -- common/autotest_common.sh@10 -- # set +x 00:10:26.275 02:55:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:26.275 02:55:05 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:26.276 02:55:05 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:26.276 02:55:05 -- nvmf/common.sh@521 -- # config=() 00:10:26.276 02:55:05 -- nvmf/common.sh@521 -- # local subsystem config 00:10:26.276 02:55:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:10:26.276 02:55:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:10:26.276 { 00:10:26.276 "params": { 00:10:26.276 "name": "Nvme$subsystem", 00:10:26.276 "trtype": "$TEST_TRANSPORT", 
00:10:26.276 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.276 "adrfam": "ipv4", 00:10:26.276 "trsvcid": "$NVMF_PORT", 00:10:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.276 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.276 "hdgst": ${hdgst:-false}, 00:10:26.276 "ddgst": ${ddgst:-false} 00:10:26.276 }, 00:10:26.276 "method": "bdev_nvme_attach_controller" 00:10:26.276 } 00:10:26.276 EOF 00:10:26.276 )") 00:10:26.276 02:55:05 -- nvmf/common.sh@543 -- # cat 00:10:26.276 02:55:05 -- nvmf/common.sh@545 -- # jq . 00:10:26.276 02:55:05 -- nvmf/common.sh@546 -- # IFS=, 00:10:26.276 02:55:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:26.276 "params": { 00:10:26.276 "name": "Nvme1", 00:10:26.276 "trtype": "tcp", 00:10:26.276 "traddr": "10.0.0.2", 00:10:26.276 "adrfam": "ipv4", 00:10:26.276 "trsvcid": "4420", 00:10:26.276 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:26.276 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:26.276 "hdgst": false, 00:10:26.276 "ddgst": false 00:10:26.276 }, 00:10:26.276 "method": "bdev_nvme_attach_controller" 00:10:26.276 }' 00:10:26.276 [2024-04-23 02:55:05.296078] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:10:26.276 [2024-04-23 02:55:05.296196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81151 ] 00:10:26.276 [2024-04-23 02:55:05.418026] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:26.535 [2024-04-23 02:55:05.436403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.535 [2024-04-23 02:55:05.474878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:26.535 Running I/O for 10 seconds... 
00:10:36.517
00:10:36.517                                                                    Latency(us)
00:10:36.517 Device Information               : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:36.517 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:36.517 	 Verification LBA range: start 0x0 length 0x1000
00:10:36.517 	 Nvme1n1                         :      10.02    6284.57      49.10       0.00       0.00   20304.47    2338.44   33363.78
00:10:36.517 ===================================================================================================================
00:10:36.517 Total                             :               6284.57      49.10       0.00       0.00   20304.47    2338.44   33363.78
00:10:36.776 02:55:15 -- target/zcopy.sh@39 -- # perfpid=81267
00:10:36.776 02:55:15 -- target/zcopy.sh@41 -- # xtrace_disable
00:10:36.776 02:55:15 -- common/autotest_common.sh@10 -- # set +x
00:10:36.776 02:55:15 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:36.776 02:55:15 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:36.776 02:55:15 -- nvmf/common.sh@521 -- # config=()
00:10:36.776 02:55:15 -- nvmf/common.sh@521 -- # local subsystem config
00:10:36.776 02:55:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:10:36.776 02:55:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:10:36.776 {
00:10:36.776   "params": {
00:10:36.776     "name": "Nvme$subsystem",
00:10:36.776     "trtype": "$TEST_TRANSPORT",
00:10:36.776     "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:36.776     "adrfam": "ipv4",
00:10:36.776     "trsvcid": "$NVMF_PORT",
00:10:36.776     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:36.776     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:36.776     "hdgst": ${hdgst:-false},
00:10:36.776     "ddgst": ${ddgst:-false}
00:10:36.776   },
00:10:36.776   "method": "bdev_nvme_attach_controller"
00:10:36.776 }
00:10:36.776 EOF
00:10:36.776 )")
00:10:36.776 02:55:15 -- nvmf/common.sh@543 -- # cat
00:10:36.776 02:55:15 -- nvmf/common.sh@545 -- # jq .
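[Annotation] The verify run settles at roughly 6.3k IOPS of 8 KiB I/O with about 20 ms average latency at queue depth 128, and the trace then re-enters gen_nvmf_target_json for the second, 5-second 50/50 randrw run. The helper emits one bdev_nvme_attach_controller entry per requested subsystem from the heredoc above, pretty-prints it through jq, and bdevperf reads the finished document from a process-substitution FD (/dev/fd/63 here) instead of touching an RPC socket. With the variables filled in as shown by the printf that follows, the full document should look roughly like this; the outer "subsystems" wrapper is SPDK's standard JSON-config layout and is assumed here, since the trace only prints the inner fragment:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }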
00:10:36.776 [2024-04-23 02:55:15.771428] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.771477] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.776 02:55:15 -- nvmf/common.sh@546 -- # IFS=, 00:10:36.776 02:55:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:10:36.776 "params": { 00:10:36.776 "name": "Nvme1", 00:10:36.776 "trtype": "tcp", 00:10:36.776 "traddr": "10.0.0.2", 00:10:36.776 "adrfam": "ipv4", 00:10:36.776 "trsvcid": "4420", 00:10:36.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:36.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:36.776 "hdgst": false, 00:10:36.776 "ddgst": false 00:10:36.776 }, 00:10:36.776 "method": "bdev_nvme_attach_controller" 00:10:36.776 }' 00:10:36.776 [2024-04-23 02:55:15.783369] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.783415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.776 [2024-04-23 02:55:15.795375] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.795416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.776 [2024-04-23 02:55:15.807363] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.807404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.776 [2024-04-23 02:55:15.818487] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:10:36.776 [2024-04-23 02:55:15.818603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81267 ] 00:10:36.776 [2024-04-23 02:55:15.819382] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.819404] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.776 [2024-04-23 02:55:15.831369] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.831409] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.776 [2024-04-23 02:55:15.843369] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.776 [2024-04-23 02:55:15.843410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.855378] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 02:55:15.855420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.867391] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 02:55:15.867435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.879400] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 02:55:15.879425] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.891384] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 
02:55:15.891427] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.903410] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 02:55:15.903451] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.915396] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 02:55:15.915436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.777 [2024-04-23 02:55:15.927420] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.777 [2024-04-23 02:55:15.927448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.939402] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.939444] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.941903] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:37.036 [2024-04-23 02:55:15.951406] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.951448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.958036] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:37.036 [2024-04-23 02:55:15.959408] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.959449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.971477] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.971536] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.979433] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.979485] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.987426] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.987472] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:15.991028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.036 [2024-04-23 02:55:15.995424] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:15.995482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.003428] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.003471] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.011449] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.011518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.023483] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 
[2024-04-23 02:55:16.023538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.031437] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.031482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.039467] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.039518] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.047435] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.047491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.055473] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.055521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.067464] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.067524] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.075487] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.075533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.083499] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.083528] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.091504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.091549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.099505] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.099550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.111526] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.111574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.119506] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.119549] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 Running I/O for 5 seconds... 
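[Annotation] The add_ns error pairs that began interleaving with the bdevperf start-up continue uninterrupted from here to the end of the section: while bdevperf drives the 5-second randrw workload, the test keeps calling nvmf_subsystem_add_ns for NSID 1 on cnode1, and the target rejects every call ("Requested NSID 1 already in use" from subsystem.c, then "Unable to add namespace" from nvmf_rpc.c) because the namespace attached during setup still holds that ID. The point is to exercise the namespace-add path, and the subsystem pause it implies, under live zero-copy I/O. A plausible reconstruction of the driving loop; the exact body lives in test/nvmf/target/zcopy.sh and is not shown in the trace:

    # hypothetical sketch: hammer the namespace-add path while the perf job is
    # alive ($perfpid=81267 in the trace); every call is expected to fail, and
    # the || true keeps the script running under set -e
    while kill -0 "$perfpid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done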
00:10:37.036 [2024-04-23 02:55:16.127519] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.127562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.140485] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.140534] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.036 [2024-04-23 02:55:16.151084] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.036 [2024-04-23 02:55:16.151143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.037 [2024-04-23 02:55:16.165307] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.037 [2024-04-23 02:55:16.165356] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.037 [2024-04-23 02:55:16.174485] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.037 [2024-04-23 02:55:16.174550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.037 [2024-04-23 02:55:16.188891] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.037 [2024-04-23 02:55:16.188945] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.199367] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.199416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.210761] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.210810] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.221636] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.221673] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.232076] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.232154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.243856] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.243904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.252700] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.252748] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.269043] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.269094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.279308] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.279359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.294236] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 
[2024-04-23 02:55:16.294266] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.311314] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.311348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.321512] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.296 [2024-04-23 02:55:16.321560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.296 [2024-04-23 02:55:16.336564] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.336596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.352233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.352265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.361706] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.361739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.373712] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.373766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.383917] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.383966] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.394328] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.394376] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.408904] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.408951] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.425786] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.425837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.297 [2024-04-23 02:55:16.443327] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.297 [2024-04-23 02:55:16.443375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.458590] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.458638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.476586] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.476635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.490973] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.491023] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.508082] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.508139] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.518372] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.518421] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.532545] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.532593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.549943] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.549992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.559798] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.559846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.570670] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.570718] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.587786] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.587835] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.603819] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.603868] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.615222] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.615271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.632262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.632310] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.642392] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.642441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.655930] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.655978] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.665186] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.665235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.679622] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.556 [2024-04-23 02:55:16.679672] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.556 [2024-04-23 02:55:16.689239] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.557 [2024-04-23 02:55:16.689288] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.557 [2024-04-23 02:55:16.704899] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.557 [2024-04-23 02:55:16.704948] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.715020] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.715068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.726646] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.726695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.737398] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.737448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.747879] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.747927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.759766] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.759814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.775641] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.775690] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.784810] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.784858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.799713] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.799761] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.809925] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.809962] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.825320] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.825370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.837494] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.837562] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.846797] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.846846] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.858251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.858301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.868518] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.868567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.878591] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.878640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.889470] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.889530] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.902653] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.902701] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.918553] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.918601] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.936808] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.936857] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.950927] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.950976] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.816 [2024-04-23 02:55:16.967838] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.816 [2024-04-23 02:55:16.967886] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.075 [2024-04-23 02:55:16.979092] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.075 [2024-04-23 02:55:16.979138] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.075 [2024-04-23 02:55:16.992584] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.075 [2024-04-23 02:55:16.992633] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.075 [2024-04-23 02:55:17.002450] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.002515] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.014409] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.014474] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.025563] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.025600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.038827] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.038879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.054601] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.054650] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.064173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.064221] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.075249] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.075299] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.087236] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.087288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.102285] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.102333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.119554] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.119603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.129919] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.129967] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.143889] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.143939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.153573] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.153623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.168124] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.168200] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.177629] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.177679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.191952] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.192000] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.201440] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.201478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.215228] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.215287] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.076 [2024-04-23 02:55:17.225320] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.076 [2024-04-23 02:55:17.225370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.240142] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.240190] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.249948] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.249996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.263709] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.263758] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.273033] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.273082] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.287000] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.287049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.297319] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.297369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.311886] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.311934] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.329073] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.329123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.338999] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.339049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.354046] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.354095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.363938] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.363987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.375429] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.375509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.387231] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.387263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.405379] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.405413] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.422480] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.422512] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.438510] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.438542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.448011] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.448060] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.459108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.459166] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.471037] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.471086] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.479802] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.479851] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.335 [2024-04-23 02:55:17.491218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.335 [2024-04-23 02:55:17.491278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.501737] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.501802] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.512171] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.512220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.523852] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.523904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.534510] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.534560] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.551910] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.551958] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.567608] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.567656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.576919] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.576968] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.592646] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.592694] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.602065] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.602113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.612951] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.612999] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.623539] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.623588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.634326] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.634387] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.646835] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.646885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.665104] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.665180] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.679526] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.679575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.689183] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.689242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.700077] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.700152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.710383] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.710430] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.721075] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.721152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.733497] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.733564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.594 [2024-04-23 02:55:17.745173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.594 [2024-04-23 02:55:17.745240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.853 [2024-04-23 02:55:17.755184] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.755262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.766655] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.766703] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.777035] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.777084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.787684] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.787733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.800156] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.800218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.809441] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.809497] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.822615] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.822665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.838559] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.838608] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.848668] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.848716] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.864192] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.864241] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.876562] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.876626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.886030] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.886080] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.897563] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.897598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.907699] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.907749] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.922567] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.922617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.940151] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.940229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.950547] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.950596] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.961748] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.961783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.974461] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.974525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:17.991601] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:17.991650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:38.854 [2024-04-23 02:55:18.008231] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:38.854 [2024-04-23 02:55:18.008300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.018580] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.018629] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.032716] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.032767] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.049153] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.049217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.059475] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.059513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.071513] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.071564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.083017] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.083069] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.099165] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.099249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.115353] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.115403] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.133292] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.133343] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.148971] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.149021] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.158361] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.158412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.170240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.170300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.185372] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.185420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.200587] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.200636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.209317] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.209367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.225400] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.225450] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.235103] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.235179] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.250082] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.250155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.113 [2024-04-23 02:55:18.265622] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.113 [2024-04-23 02:55:18.265674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.276301] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.276337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.287757] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.287793] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.298523] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.298574] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.310353] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.310402] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.321617] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.321653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.334051] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.334102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.344954] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.345004] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.357528] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.357564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.372112] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.372181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.384869] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.384918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.394304] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.394353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.405623] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.405674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.416132] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.416224] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.427405] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.427455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.438260] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.438309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.449377] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.449412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.464836] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.464869] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.475707] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.475739] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.487756] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.487789] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.502796] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.502829] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.513631] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.513665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.383 [2024-04-23 02:55:18.528880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.383 [2024-04-23 02:55:18.528918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.546297] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.546349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.555969] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.556018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.570147] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.570238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.580123] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.580218] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.595380] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.595429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.604841] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.604890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.620258] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.620308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.629378] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.629429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.642763] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.642813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.657221] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.657270] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.674498] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.674550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.654 [2024-04-23 02:55:18.689587] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.654 [2024-04-23 02:55:18.689640] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.699570] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.699621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.710850] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.710899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.728518] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.728568] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.746374] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.746410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.761383] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.761432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.770386] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.770435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.783462] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.783526] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.794321] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.794369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.655 [2024-04-23 02:55:18.805123] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.655 [2024-04-23 02:55:18.805199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.912 [2024-04-23 02:55:18.818498] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.912 [2024-04-23 02:55:18.818548] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.912 [2024-04-23 02:55:18.828088] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.912 [2024-04-23 02:55:18.828162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.912 [2024-04-23 02:55:18.843504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.843538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.854558] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.854607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.869987] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.870036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.880407] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.880458] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.891188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.891238] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.904076] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.904152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.923316] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.923365] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.938119] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.938197] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.947444] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.947493] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.963368] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.963416] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.980767] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.980817] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:18.991108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:18.991186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:19.006090] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:19.006163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:19.016022] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:19.016070] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:19.031204] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:19.031254] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:19.041121] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:19.041213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:39.913 [2024-04-23 02:55:19.055407] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:39.913 [2024-04-23 02:55:19.055457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.072933] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.072983] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.083207] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.083256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.097728] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.097783] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.113548] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.113614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.123066] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.123115] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.135535] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.135588] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.148058] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.148092] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.157414] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.157463] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.170613] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.170676] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.181590] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.181626] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.196902] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.196957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.207697] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.207730] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.222186] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.222265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.231810] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.231859] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.247132] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.247209] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.256931] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.256980] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.271729] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.271780] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.281721] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.170 [2024-04-23 02:55:19.281773] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.170 [2024-04-23 02:55:19.292890] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.171 [2024-04-23 02:55:19.292939] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.171 [2024-04-23 02:55:19.309846] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.171 [2024-04-23 02:55:19.309895] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.171 [2024-04-23 02:55:19.319079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.171 [2024-04-23 02:55:19.319153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.333262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.333313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.344432] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.344482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.361505] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.361538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.372433] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.372469] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.384676] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.384727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.395744] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.395796] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.412160] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.412222] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.422436] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.422486] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.437318] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.437369] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.456221] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.456271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.470914] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.470964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.480673] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.480723] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.492273] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.492322] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.509361] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.509412] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.526079] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.526116] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.541910] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.541942] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.551760] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.551792] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.566866] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.566899] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.428 [2024-04-23 02:55:19.578096] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.428 [2024-04-23 02:55:19.578157] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.589801] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.589865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.600953] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.600985] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.617018] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.617071] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.627787] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.627837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.640950] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.640999] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.650652] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.650704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.664945] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.686 [2024-04-23 02:55:19.664994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.686 [2024-04-23 02:55:19.681986] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.682037] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.691698] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.691746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.703505] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.703554] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.714573] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.714625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.727080] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.727155] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.736547] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.736597] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.748000] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.748049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.759281] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.759330] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.770207] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.770267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.780956] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.781005] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.797696] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.797746] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.815287] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.815337] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.830005] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.830056] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.687 [2024-04-23 02:55:19.839441] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.687 [2024-04-23 02:55:19.839475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.852105] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.852165] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.867745] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.867795] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.883512] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.883576] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.892663] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.892713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.905427] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.905476] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.915750] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.915800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.930277] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.930325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.940256] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.940307] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.954974] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.955007] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.965092] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.965167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.980457] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.980522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:19.996855] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:19.996904] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.007895] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.007962] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.023331] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.023381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.040107] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.040195] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.049553] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.049590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.065411] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.065459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.074448] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.074513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.086301] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.086349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:40.946 [2024-04-23 02:55:20.098310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:40.946 [2024-04-23 02:55:20.098359] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.113937] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.113986] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.131305] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.131354] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.141046] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.141094] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.155199] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.155263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.164766] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.164814] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.178764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.178813] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.188457] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.188521] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.203742] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.203791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.213449] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.213516] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.228749] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.228799] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.238914] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.238965] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.254076] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.254156] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.264905] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.264953] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.280226] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.280276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.289841] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.289891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.304993] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.305041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.314780] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.314829] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.329333] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.329382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.339198] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.339248] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.205 [2024-04-23 02:55:20.353037] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.205 [2024-04-23 02:55:20.353085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.363654] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.363687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.378877] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.378908] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.395459] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.395525] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.405256] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.405305] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.420791] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.420840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.439316] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.439364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.449431] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.449481] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.460450] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.460483] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.473600] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.473652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.489451] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.489507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.507572] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.507620] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.518307] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.518355] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.464 [2024-04-23 02:55:20.530859] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.464 [2024-04-23 02:55:20.530909] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.465 [2024-04-23 02:55:20.540659] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.465 [2024-04-23 02:55:20.540707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.465 [2024-04-23 02:55:20.555463] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.465 [2024-04-23 02:55:20.555512] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.465 [2024-04-23 02:55:20.564637] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.465 [2024-04-23 02:55:20.564685] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.465 [2024-04-23 02:55:20.580831] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.465 [2024-04-23 02:55:20.580879] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.465 [2024-04-23 02:55:21.122176] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.982 [2024-04-23 02:55:21.122235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:41.982
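The paired errors above are the expected output of this zcopy test phase: the same nvmf_subsystem_add_ns RPC is issued over and over against a subsystem that already owns NSID 1, so each attempt is rejected in spdk_nvmf_subsystem_add_ns_ext and surfaces through nvmf_rpc as "Unable to add namespace". A minimal sketch of a loop that would produce this pattern, assuming a running target where nqn.2016-06.io.spdk:cnode1 already has namespace 1 attached and a Malloc0 bdev exists as in this run; the iteration count is illustrative:

# Each call fails with "Requested NSID 1 already in use" because NSID 1
# is already attached to nqn.2016-06.io.spdk:cnode1.
for i in $(seq 1 50); do
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1 || true
done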
00:10:41.982 Latency(us) 00:10:41.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:41.982 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:41.982 Nvme1n1 : 5.01 11707.94 91.47 0.00 0.00 10921.32 4617.31 24307.90 00:10:41.982 =================================================================================================================== 00:10:41.982 Total : 11707.94 91.47 0.00 0.00 10921.32 4617.31 24307.90 00:10:41.982 [2024-04-23 02:55:21.134381] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:41.982 [2024-04-23 02:55:21.134429] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.240 [2024-04-23 02:55:21.250423] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.241 [2024-04-23 02:55:21.250490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.241 [2024-04-23 02:55:21.258404]
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.241 [2024-04-23 02:55:21.258447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.241 [2024-04-23 02:55:21.266407] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:42.241 [2024-04-23 02:55:21.266447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:42.241 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (81267) - No such process 00:10:42.241 02:55:21 -- target/zcopy.sh@49 -- # wait 81267 00:10:42.241 02:55:21 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.241 02:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.241 02:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:42.241 02:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.241 02:55:21 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:42.241 02:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.241 02:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:42.241 delay0 00:10:42.241 02:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.241 02:55:21 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:42.241 02:55:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:42.241 02:55:21 -- common/autotest_common.sh@10 -- # set +x 00:10:42.241 02:55:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:42.241 02:55:21 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:42.499 [2024-04-23 02:55:21.463448] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:49.065 Initializing NVMe Controllers 00:10:49.065 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:49.065 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:49.065 Initialization complete. Launching workers. 
00:10:49.065 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 69 00:10:49.065 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 356, failed to submit 33 00:10:49.065 success 225, unsuccess 131, failed 0 00:10:49.065 02:55:27 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:49.065 02:55:27 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:49.065 02:55:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:49.065 02:55:27 -- nvmf/common.sh@117 -- # sync 00:10:49.065 02:55:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:49.065 02:55:27 -- nvmf/common.sh@120 -- # set +e 00:10:49.065 02:55:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:49.065 02:55:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:49.065 rmmod nvme_tcp 00:10:49.065 rmmod nvme_fabrics 00:10:49.065 rmmod nvme_keyring 00:10:49.065 02:55:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:49.065 02:55:27 -- nvmf/common.sh@124 -- # set -e 00:10:49.065 02:55:27 -- nvmf/common.sh@125 -- # return 0 00:10:49.065 02:55:27 -- nvmf/common.sh@478 -- # '[' -n 81118 ']' 00:10:49.065 02:55:27 -- nvmf/common.sh@479 -- # killprocess 81118 00:10:49.065 02:55:27 -- common/autotest_common.sh@936 -- # '[' -z 81118 ']' 00:10:49.065 02:55:27 -- common/autotest_common.sh@940 -- # kill -0 81118 00:10:49.065 02:55:27 -- common/autotest_common.sh@941 -- # uname 00:10:49.065 02:55:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:49.065 02:55:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81118 00:10:49.065 02:55:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:10:49.065 02:55:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:10:49.065 killing process with pid 81118 00:10:49.065 02:55:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81118' 00:10:49.065 02:55:27 -- common/autotest_common.sh@955 -- # kill 81118 00:10:49.066 02:55:27 -- common/autotest_common.sh@960 -- # wait 81118 00:10:49.066 02:55:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:49.066 02:55:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:49.066 02:55:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:49.066 02:55:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.066 02:55:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:49.066 02:55:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.066 02:55:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.066 02:55:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.066 02:55:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:49.066 00:10:49.066 real 0m24.063s 00:10:49.066 user 0m39.374s 00:10:49.066 sys 0m6.661s 00:10:49.066 02:55:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:49.066 02:55:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.066 ************************************ 00:10:49.066 END TEST nvmf_zcopy 00:10:49.066 ************************************ 00:10:49.066 02:55:27 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:49.066 02:55:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:49.066 02:55:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.066 02:55:27 -- common/autotest_common.sh@10 -- # set +x 00:10:49.066 ************************************ 00:10:49.066 START TEST nvmf_nmic 
00:10:49.066 ************************************ 00:10:49.066 02:55:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:49.066 * Looking for test storage... 00:10:49.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:49.066 02:55:28 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.066 02:55:28 -- nvmf/common.sh@7 -- # uname -s 00:10:49.066 02:55:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.066 02:55:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.066 02:55:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.066 02:55:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.066 02:55:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.066 02:55:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.066 02:55:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.066 02:55:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.066 02:55:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.066 02:55:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.066 02:55:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:49.066 02:55:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:49.066 02:55:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.066 02:55:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.066 02:55:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.066 02:55:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.066 02:55:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.066 02:55:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.066 02:55:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.066 02:55:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.066 02:55:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.066 02:55:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.066 02:55:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.066 02:55:28 -- paths/export.sh@5 -- # export PATH 00:10:49.066 02:55:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.066 02:55:28 -- nvmf/common.sh@47 -- # : 0 00:10:49.066 02:55:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:49.066 02:55:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:49.066 02:55:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.066 02:55:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.066 02:55:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.066 02:55:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:49.066 02:55:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:49.066 02:55:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:49.066 02:55:28 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.066 02:55:28 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.066 02:55:28 -- target/nmic.sh@14 -- # nvmftestinit 00:10:49.066 02:55:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:49.066 02:55:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.066 02:55:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:49.066 02:55:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:49.066 02:55:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:49.066 02:55:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.066 02:55:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.066 02:55:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.066 02:55:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:49.066 02:55:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:49.066 02:55:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:49.066 02:55:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:49.066 02:55:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:49.066 02:55:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:49.066 02:55:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.066 02:55:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.066 02:55:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:49.066 02:55:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:49.066 02:55:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.066 02:55:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.066 02:55:28 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.066 02:55:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.066 02:55:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.066 02:55:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.066 02:55:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.066 02:55:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.066 02:55:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:49.066 02:55:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:49.066 Cannot find device "nvmf_tgt_br" 00:10:49.066 02:55:28 -- nvmf/common.sh@155 -- # true 00:10:49.066 02:55:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.066 Cannot find device "nvmf_tgt_br2" 00:10:49.066 02:55:28 -- nvmf/common.sh@156 -- # true 00:10:49.066 02:55:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:49.066 02:55:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:49.066 Cannot find device "nvmf_tgt_br" 00:10:49.066 02:55:28 -- nvmf/common.sh@158 -- # true 00:10:49.066 02:55:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:49.066 Cannot find device "nvmf_tgt_br2" 00:10:49.066 02:55:28 -- nvmf/common.sh@159 -- # true 00:10:49.066 02:55:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:49.066 02:55:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:49.066 02:55:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.066 02:55:28 -- nvmf/common.sh@162 -- # true 00:10:49.066 02:55:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.066 02:55:28 -- nvmf/common.sh@163 -- # true 00:10:49.066 02:55:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.066 02:55:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.066 02:55:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.066 02:55:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.326 02:55:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.326 02:55:28 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.326 02:55:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.326 02:55:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:49.326 02:55:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:49.326 02:55:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:49.326 02:55:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:49.326 02:55:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:49.326 02:55:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:49.326 02:55:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.326 02:55:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.326 02:55:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:49.326 02:55:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:49.326 02:55:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:49.326 02:55:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.326 02:55:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.326 02:55:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.326 02:55:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.326 02:55:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.326 02:55:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:49.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:49.326 00:10:49.326 --- 10.0.0.2 ping statistics --- 00:10:49.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.326 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:49.326 02:55:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:49.326 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.326 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:49.326 00:10:49.326 --- 10.0.0.3 ping statistics --- 00:10:49.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.326 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:49.326 02:55:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:10:49.326 00:10:49.326 --- 10.0.0.1 ping statistics --- 00:10:49.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.326 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:49.326 02:55:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.326 02:55:28 -- nvmf/common.sh@422 -- # return 0 00:10:49.326 02:55:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:49.326 02:55:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.326 02:55:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:49.326 02:55:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:49.326 02:55:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.326 02:55:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:49.326 02:55:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:49.326 02:55:28 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:49.326 02:55:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:49.326 02:55:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:49.326 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.326 02:55:28 -- nvmf/common.sh@470 -- # nvmfpid=81592 00:10:49.326 02:55:28 -- nvmf/common.sh@471 -- # waitforlisten 81592 00:10:49.326 02:55:28 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.326 02:55:28 -- common/autotest_common.sh@817 -- # '[' -z 81592 ']' 00:10:49.326 02:55:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.326 02:55:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
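The interface errors and pings above are test/nvmf/common.sh building its veth topology: cleanup of a previous run fails harmlessly ("Cannot find device", "No such file or directory"), then a network namespace, veth pairs, a bridge, and an iptables rule are created, and reachability is verified before the target starts. A condensed sketch of the same plumbing for one target interface, using the names and 10.0.0.x addresses from the log (run as root, error handling omitted; the second target pair nvmf_tgt_if2/10.0.0.3 follows the same pattern):

# Namespace plus veth pairs: the *_if ends carry traffic, the *_br ends join the bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Addresses: initiator 10.0.0.1 on the host, target 10.0.0.2 inside the namespace.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

# Bring everything up and bridge the host-side ends together.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic on port 4420 and verify the path end to end.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2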
00:10:49.326 02:55:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.326 02:55:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.326 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.586 [2024-04-23 02:55:28.498464] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:10:49.586 [2024-04-23 02:55:28.498570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.586 [2024-04-23 02:55:28.621837] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:49.586 [2024-04-23 02:55:28.639403] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.586 [2024-04-23 02:55:28.672610] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.586 [2024-04-23 02:55:28.672681] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.586 [2024-04-23 02:55:28.672708] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.586 [2024-04-23 02:55:28.672716] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.586 [2024-04-23 02:55:28.672723] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.586 [2024-04-23 02:55:28.672895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.586 [2024-04-23 02:55:28.673639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.586 [2024-04-23 02:55:28.673743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.586 [2024-04-23 02:55:28.673747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.846 02:55:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:49.846 02:55:28 -- common/autotest_common.sh@850 -- # return 0 00:10:49.846 02:55:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:49.846 02:55:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 02:55:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.846 02:55:28 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 [2024-04-23 02:55:28.788839] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 Malloc0 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- 
common/autotest_common.sh@10 -- # set +x 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 [2024-04-23 02:55:28.842868] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 test case1: single bdev can't be used in multiple subsystems 00:10:49.846 02:55:28 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:49.846 02:55:28 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@28 -- # nmic_status=0 00:10:49.846 02:55:28 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 [2024-04-23 02:55:28.866756] bdev.c:7988:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:49.846 [2024-04-23 02:55:28.866798] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:49.846 [2024-04-23 02:55:28.866811] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:49.846 request: 00:10:49.846 { 00:10:49.846 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:49.846 "namespace": { 00:10:49.846 "bdev_name": "Malloc0", 00:10:49.846 "no_auto_visible": false 00:10:49.846 }, 00:10:49.846 "method": "nvmf_subsystem_add_ns", 00:10:49.846 "req_id": 1 00:10:49.846 } 00:10:49.846 Got JSON-RPC error response 00:10:49.846 response: 00:10:49.846 { 00:10:49.846 "code": -32602, 00:10:49.846 "message": "Invalid parameters" 00:10:49.846 } 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@29 -- # nmic_status=1 00:10:49.846 02:55:28 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:49.846 Adding namespace failed - expected result. 00:10:49.846 02:55:28 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
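test case1 above verifies that a single bdev cannot back namespaces in two subsystems: the first nvmf_subsystem_add_ns takes an exclusive_write claim on Malloc0, so the second add fails ("bdev Malloc0 cannot be opened, error=-1") and the RPC returns code -32602, which is the expected result. A sketch of the RPC sequence, mirroring the rpc_cmd calls traced in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 exclusive_write
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed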
00:10:49.846 test case2: host connect to nvmf target in multiple paths 00:10:49.846 02:55:28 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:49.846 02:55:28 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:49.846 02:55:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:49.846 02:55:28 -- common/autotest_common.sh@10 -- # set +x 00:10:49.846 [2024-04-23 02:55:28.878836] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:49.846 02:55:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:49.846 02:55:28 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:50.105 02:55:29 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:50.105 02:55:29 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:50.105 02:55:29 -- common/autotest_common.sh@1184 -- # local i=0 00:10:50.105 02:55:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:50.105 02:55:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:50.105 02:55:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:52.011 02:55:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:52.011 02:55:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:52.011 02:55:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:52.011 02:55:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:10:52.011 02:55:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:52.011 02:55:31 -- common/autotest_common.sh@1194 -- # return 0 00:10:52.011 02:55:31 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:52.269 [global] 00:10:52.269 thread=1 00:10:52.269 invalidate=1 00:10:52.269 rw=write 00:10:52.269 time_based=1 00:10:52.269 runtime=1 00:10:52.269 ioengine=libaio 00:10:52.269 direct=1 00:10:52.269 bs=4096 00:10:52.269 iodepth=1 00:10:52.269 norandommap=0 00:10:52.269 numjobs=1 00:10:52.269 00:10:52.269 verify_dump=1 00:10:52.269 verify_backlog=512 00:10:52.269 verify_state_save=0 00:10:52.269 do_verify=1 00:10:52.269 verify=crc32c-intel 00:10:52.269 [job0] 00:10:52.269 filename=/dev/nvme0n1 00:10:52.269 Could not set queue depth (nvme0n1) 00:10:52.269 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.269 fio-3.35 00:10:52.269 Starting 1 thread 00:10:53.646 00:10:53.646 job0: (groupid=0, jobs=1): err= 0: pid=81671: Tue Apr 23 02:55:32 2024 00:10:53.646 read: IOPS=2943, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec) 00:10:53.646 slat (nsec): min=12520, max=53912, avg=14623.87, stdev=3224.03 00:10:53.646 clat (usec): min=141, max=339, avg=181.25, stdev=17.42 00:10:53.646 lat (usec): min=154, max=359, avg=195.88, stdev=17.73 00:10:53.646 clat percentiles (usec): 00:10:53.646 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:10:53.646 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:10:53.646 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 202], 
95.00th=[ 210], 00:10:53.646 | 99.00th=[ 229], 99.50th=[ 235], 99.90th=[ 297], 99.95th=[ 338], 00:10:53.646 | 99.99th=[ 338] 00:10:53.646 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:53.646 slat (nsec): min=18625, max=92663, avg=22450.12, stdev=5443.80 00:10:53.646 clat (usec): min=84, max=684, avg=111.64, stdev=20.01 00:10:53.647 lat (usec): min=104, max=726, avg=134.09, stdev=21.78 00:10:53.647 clat percentiles (usec): 00:10:53.647 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 100], 00:10:53.647 | 30.00th=[ 103], 40.00th=[ 106], 50.00th=[ 109], 60.00th=[ 111], 00:10:53.647 | 70.00th=[ 114], 80.00th=[ 120], 90.00th=[ 131], 95.00th=[ 143], 00:10:53.647 | 99.00th=[ 180], 99.50th=[ 210], 99.90th=[ 233], 99.95th=[ 293], 00:10:53.647 | 99.99th=[ 685] 00:10:53.647 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:53.647 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:53.647 lat (usec) : 100=9.69%, 250=90.16%, 500=0.13%, 750=0.02% 00:10:53.647 cpu : usr=2.10%, sys=8.90%, ctx=6018, majf=0, minf=2 00:10:53.647 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:53.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:53.647 issued rwts: total=2946,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:53.647 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:53.647 00:10:53.647 Run status group 0 (all jobs): 00:10:53.647 READ: bw=11.5MiB/s (12.1MB/s), 11.5MiB/s-11.5MiB/s (12.1MB/s-12.1MB/s), io=11.5MiB (12.1MB), run=1001-1001msec 00:10:53.647 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:53.647 00:10:53.647 Disk stats (read/write): 00:10:53.647 nvme0n1: ios=2610/2873, merge=0/0, ticks=475/353, in_queue=828, util=91.38% 00:10:53.647 02:55:32 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:53.647 02:55:32 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.647 02:55:32 -- common/autotest_common.sh@1205 -- # local i=0 00:10:53.647 02:55:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.647 02:55:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:10:53.647 02:55:32 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:10:53.647 02:55:32 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.647 02:55:32 -- common/autotest_common.sh@1217 -- # return 0 00:10:53.647 02:55:32 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:53.647 02:55:32 -- target/nmic.sh@53 -- # nvmftestfini 00:10:53.647 02:55:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:53.647 02:55:32 -- nvmf/common.sh@117 -- # sync 00:10:53.647 02:55:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:53.647 02:55:32 -- nvmf/common.sh@120 -- # set +e 00:10:53.647 02:55:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.647 02:55:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:53.647 rmmod nvme_tcp 00:10:53.647 rmmod nvme_fabrics 00:10:53.647 rmmod nvme_keyring 00:10:53.647 02:55:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.647 02:55:32 -- nvmf/common.sh@124 -- # set -e 00:10:53.647 02:55:32 -- nvmf/common.sh@125 -- # return 0 00:10:53.647 02:55:32 -- 
nvmf/common.sh@478 -- # '[' -n 81592 ']' 00:10:53.647 02:55:32 -- nvmf/common.sh@479 -- # killprocess 81592 00:10:53.647 02:55:32 -- common/autotest_common.sh@936 -- # '[' -z 81592 ']' 00:10:53.647 02:55:32 -- common/autotest_common.sh@940 -- # kill -0 81592 00:10:53.647 02:55:32 -- common/autotest_common.sh@941 -- # uname 00:10:53.647 02:55:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:53.647 02:55:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81592 00:10:53.647 02:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:53.647 02:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:53.647 killing process with pid 81592 00:10:53.647 02:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81592' 00:10:53.647 02:55:32 -- common/autotest_common.sh@955 -- # kill 81592 00:10:53.647 02:55:32 -- common/autotest_common.sh@960 -- # wait 81592 00:10:53.906 02:55:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:53.906 02:55:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:53.906 02:55:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:53.906 02:55:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:53.906 02:55:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:53.906 02:55:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.906 02:55:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.906 02:55:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:53.906 02:55:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:10:53.906 00:10:53.906 real 0m4.881s 00:10:53.906 user 0m15.237s 00:10:53.906 sys 0m2.030s 00:10:53.906 02:55:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:53.906 02:55:32 -- common/autotest_common.sh@10 -- # set +x 00:10:53.906 ************************************ 00:10:53.906 END TEST nvmf_nmic 00:10:53.906 ************************************ 00:10:53.906 02:55:32 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.906 02:55:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:53.906 02:55:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:53.907 02:55:32 -- common/autotest_common.sh@10 -- # set +x 00:10:53.907 ************************************ 00:10:53.907 START TEST nvmf_fio_target 00:10:53.907 ************************************ 00:10:53.907 02:55:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:53.907 * Looking for test storage... 
00:10:53.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:53.907 02:55:33 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:53.907 02:55:33 -- nvmf/common.sh@7 -- # uname -s 00:10:53.907 02:55:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.907 02:55:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.907 02:55:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.907 02:55:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.907 02:55:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:53.907 02:55:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:53.907 02:55:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.907 02:55:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:53.907 02:55:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.907 02:55:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:53.907 02:55:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:53.907 02:55:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:10:53.907 02:55:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.907 02:55:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:53.907 02:55:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:53.907 02:55:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.907 02:55:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:53.907 02:55:33 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.907 02:55:33 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.907 02:55:33 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.907 02:55:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 02:55:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 02:55:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 02:55:33 -- paths/export.sh@5 -- # export PATH 00:10:53.907 02:55:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.907 02:55:33 -- nvmf/common.sh@47 -- # : 0 00:10:53.907 02:55:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:53.907 02:55:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:53.907 02:55:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:53.907 02:55:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.907 02:55:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.907 02:55:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:53.907 02:55:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:53.907 02:55:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:53.907 02:55:33 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.907 02:55:33 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.907 02:55:33 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:53.907 02:55:33 -- target/fio.sh@16 -- # nvmftestinit 00:10:53.907 02:55:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:53.907 02:55:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.907 02:55:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:53.907 02:55:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:53.907 02:55:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:53.907 02:55:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:53.907 02:55:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:53.907 02:55:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.165 02:55:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:10:54.165 02:55:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:10:54.165 02:55:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:10:54.165 02:55:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:10:54.165 02:55:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:10:54.165 02:55:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:10:54.165 02:55:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.165 02:55:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.165 02:55:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:54.165 02:55:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:10:54.165 02:55:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:54.165 02:55:33 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:54.165 02:55:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:54.165 02:55:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.165 02:55:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:54.165 02:55:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:54.165 02:55:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:54.165 02:55:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:54.165 02:55:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:10:54.165 02:55:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:10:54.165 Cannot find device "nvmf_tgt_br" 00:10:54.165 02:55:33 -- nvmf/common.sh@155 -- # true 00:10:54.165 02:55:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:10:54.165 Cannot find device "nvmf_tgt_br2" 00:10:54.165 02:55:33 -- nvmf/common.sh@156 -- # true 00:10:54.165 02:55:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:10:54.165 02:55:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:10:54.165 Cannot find device "nvmf_tgt_br" 00:10:54.165 02:55:33 -- nvmf/common.sh@158 -- # true 00:10:54.165 02:55:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:10:54.165 Cannot find device "nvmf_tgt_br2" 00:10:54.165 02:55:33 -- nvmf/common.sh@159 -- # true 00:10:54.165 02:55:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:10:54.165 02:55:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:10:54.165 02:55:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:54.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.165 02:55:33 -- nvmf/common.sh@162 -- # true 00:10:54.165 02:55:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:54.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:54.165 02:55:33 -- nvmf/common.sh@163 -- # true 00:10:54.165 02:55:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:10:54.165 02:55:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:54.165 02:55:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:54.166 02:55:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:54.166 02:55:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:54.166 02:55:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:54.166 02:55:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:54.166 02:55:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:54.166 02:55:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:54.166 02:55:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:10:54.166 02:55:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:10:54.166 02:55:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:10:54.166 02:55:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:10:54.166 02:55:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:54.166 02:55:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:10:54.166 02:55:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:54.166 02:55:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:10:54.166 02:55:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:10:54.424 02:55:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:10:54.424 02:55:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:54.424 02:55:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:54.424 02:55:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:54.424 02:55:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:54.424 02:55:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:10:54.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:10:54.424 00:10:54.424 --- 10.0.0.2 ping statistics --- 00:10:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.424 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:10:54.424 02:55:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:10:54.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:54.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:10:54.424 00:10:54.424 --- 10.0.0.3 ping statistics --- 00:10:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.424 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:10:54.424 02:55:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:54.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:54.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:10:54.424 00:10:54.424 --- 10.0.0.1 ping statistics --- 00:10:54.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.424 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:10:54.424 02:55:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.424 02:55:33 -- nvmf/common.sh@422 -- # return 0 00:10:54.424 02:55:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:54.424 02:55:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.424 02:55:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:54.424 02:55:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:54.424 02:55:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.424 02:55:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:54.424 02:55:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:54.424 02:55:33 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:54.424 02:55:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:54.424 02:55:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:54.424 02:55:33 -- common/autotest_common.sh@10 -- # set +x 00:10:54.424 02:55:33 -- nvmf/common.sh@470 -- # nvmfpid=81855 00:10:54.424 02:55:33 -- nvmf/common.sh@471 -- # waitforlisten 81855 00:10:54.424 02:55:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.424 02:55:33 -- common/autotest_common.sh@817 -- # '[' -z 81855 ']' 00:10:54.424 02:55:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.424 02:55:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:54.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
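Target bring-up for the fio run repeats the pattern of the nmic test: nvmf_tgt is launched inside the namespace, the harness waits for the RPC socket, and fio.sh then creates the TCP transport. A condensed sketch with the flags shown in this log; the readiness loop below is a simple rpc probe standing in for the harness's waitforlisten helper:

# Start the target in the namespace: shm id 0, all trace groups enabled, 4-core mask.
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

# Wait until /var/tmp/spdk.sock answers, then create the transport as fio.sh@19 does.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192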
00:10:54.424 02:55:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.424 02:55:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:54.424 02:55:33 -- common/autotest_common.sh@10 -- # set +x 00:10:54.424 [2024-04-23 02:55:33.460225] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:10:54.424 [2024-04-23 02:55:33.460327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.683 [2024-04-23 02:55:33.584901] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:10:54.683 [2024-04-23 02:55:33.599849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.683 [2024-04-23 02:55:33.641083] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.683 [2024-04-23 02:55:33.641160] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.683 [2024-04-23 02:55:33.641175] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.683 [2024-04-23 02:55:33.641185] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.683 [2024-04-23 02:55:33.641194] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:54.683 [2024-04-23 02:55:33.641275] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.683 [2024-04-23 02:55:33.641339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.683 [2024-04-23 02:55:33.641471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.683 [2024-04-23 02:55:33.641479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.675 02:55:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:55.675 02:55:34 -- common/autotest_common.sh@850 -- # return 0 00:10:55.675 02:55:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:55.675 02:55:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:55.675 02:55:34 -- common/autotest_common.sh@10 -- # set +x 00:10:55.675 02:55:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:55.675 02:55:34 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:55.675 [2024-04-23 02:55:34.701368] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:55.675 02:55:34 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:55.958 02:55:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:55.959 02:55:35 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.218 02:55:35 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:56.218 02:55:35 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.477 02:55:35 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:56.477 02:55:35 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:56.736 02:55:35 -- target/fio.sh@25 -- # 
raid_malloc_bdevs+=Malloc3 00:10:56.736 02:55:35 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:56.995 02:55:36 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.253 02:55:36 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:57.253 02:55:36 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.511 02:55:36 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:57.511 02:55:36 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:57.769 02:55:36 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:57.769 02:55:36 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:58.028 02:55:37 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:58.286 02:55:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:58.286 02:55:37 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:58.544 02:55:37 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:58.544 02:55:37 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:58.802 02:55:37 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.060 [2024-04-23 02:55:38.128339] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.060 02:55:38 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:59.318 02:55:38 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:59.576 02:55:38 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:59.833 02:55:38 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:59.833 02:55:38 -- common/autotest_common.sh@1184 -- # local i=0 00:10:59.833 02:55:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:59.833 02:55:38 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:10:59.833 02:55:38 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:10:59.833 02:55:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:01.734 02:55:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:01.734 02:55:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:01.734 02:55:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:01.734 02:55:40 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:11:01.734 02:55:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:01.734 02:55:40 -- common/autotest_common.sh@1194 -- # return 0 00:11:01.734 02:55:40 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:01.734 [global] 00:11:01.734 thread=1 
00:11:01.734 invalidate=1 00:11:01.734 rw=write 00:11:01.734 time_based=1 00:11:01.734 runtime=1 00:11:01.734 ioengine=libaio 00:11:01.734 direct=1 00:11:01.734 bs=4096 00:11:01.734 iodepth=1 00:11:01.734 norandommap=0 00:11:01.734 numjobs=1 00:11:01.734 00:11:01.734 verify_dump=1 00:11:01.734 verify_backlog=512 00:11:01.734 verify_state_save=0 00:11:01.734 do_verify=1 00:11:01.734 verify=crc32c-intel 00:11:01.734 [job0] 00:11:01.734 filename=/dev/nvme0n1 00:11:01.734 [job1] 00:11:01.734 filename=/dev/nvme0n2 00:11:01.734 [job2] 00:11:01.734 filename=/dev/nvme0n3 00:11:01.734 [job3] 00:11:01.734 filename=/dev/nvme0n4 00:11:01.993 Could not set queue depth (nvme0n1) 00:11:01.993 Could not set queue depth (nvme0n2) 00:11:01.993 Could not set queue depth (nvme0n3) 00:11:01.993 Could not set queue depth (nvme0n4) 00:11:01.993 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.993 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.993 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.993 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:01.993 fio-3.35 00:11:01.993 Starting 4 threads 00:11:03.367 00:11:03.367 job0: (groupid=0, jobs=1): err= 0: pid=82046: Tue Apr 23 02:55:42 2024 00:11:03.367 read: IOPS=1809, BW=7237KiB/s (7410kB/s)(7244KiB/1001msec) 00:11:03.367 slat (nsec): min=14752, max=44195, avg=18691.00, stdev=2816.37 00:11:03.367 clat (usec): min=175, max=493, avg=259.15, stdev=27.12 00:11:03.367 lat (usec): min=194, max=514, avg=277.84, stdev=27.48 00:11:03.367 clat percentiles (usec): 00:11:03.367 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:11:03.367 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:11:03.367 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:11:03.367 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 486], 99.95th=[ 494], 00:11:03.367 | 99.99th=[ 494] 00:11:03.367 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:03.367 slat (nsec): min=18987, max=93159, avg=29299.17, stdev=7394.01 00:11:03.367 clat (usec): min=94, max=2052, avg=209.16, stdev=64.91 00:11:03.367 lat (usec): min=126, max=2078, avg=238.46, stdev=67.36 00:11:03.367 clat percentiles (usec): 00:11:03.367 | 1.00th=[ 111], 5.00th=[ 127], 10.00th=[ 178], 20.00th=[ 184], 00:11:03.367 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 204], 00:11:03.367 | 70.00th=[ 212], 80.00th=[ 221], 90.00th=[ 281], 95.00th=[ 338], 00:11:03.367 | 99.00th=[ 363], 99.50th=[ 375], 99.90th=[ 396], 99.95th=[ 660], 00:11:03.367 | 99.99th=[ 2057] 00:11:03.367 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:11:03.367 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:03.367 lat (usec) : 100=0.10%, 250=66.47%, 500=33.38%, 750=0.03% 00:11:03.367 lat (msec) : 4=0.03% 00:11:03.367 cpu : usr=1.50%, sys=7.60%, ctx=3871, majf=0, minf=8 00:11:03.367 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.367 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.367 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.367 issued rwts: total=1811,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.367 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.367 
job1: (groupid=0, jobs=1): err= 0: pid=82047: Tue Apr 23 02:55:42 2024 00:11:03.367 read: IOPS=1920, BW=7680KiB/s (7865kB/s)(7688KiB/1001msec) 00:11:03.367 slat (nsec): min=14577, max=68356, avg=18726.73, stdev=4431.07 00:11:03.367 clat (usec): min=155, max=550, avg=258.59, stdev=34.55 00:11:03.367 lat (usec): min=171, max=571, avg=277.32, stdev=34.73 00:11:03.367 clat percentiles (usec): 00:11:03.367 | 1.00th=[ 223], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:11:03.367 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:11:03.367 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:11:03.367 | 99.00th=[ 486], 99.50th=[ 510], 99.90th=[ 545], 99.95th=[ 553], 00:11:03.367 | 99.99th=[ 553] 00:11:03.367 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:03.367 slat (nsec): min=20073, max=88639, avg=26574.01, stdev=5544.83 00:11:03.367 clat (usec): min=98, max=573, avg=197.07, stdev=25.09 00:11:03.367 lat (usec): min=120, max=598, avg=223.64, stdev=25.55 00:11:03.367 clat percentiles (usec): 00:11:03.367 | 1.00th=[ 116], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 186], 00:11:03.368 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 200], 00:11:03.368 | 70.00th=[ 204], 80.00th=[ 210], 90.00th=[ 219], 95.00th=[ 229], 00:11:03.368 | 99.00th=[ 251], 99.50th=[ 277], 99.90th=[ 429], 99.95th=[ 461], 00:11:03.368 | 99.99th=[ 570] 00:11:03.368 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:11:03.368 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:03.368 lat (usec) : 100=0.03%, 250=71.61%, 500=27.98%, 750=0.38% 00:11:03.368 cpu : usr=1.80%, sys=7.20%, ctx=3970, majf=0, minf=13 00:11:03.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.368 issued rwts: total=1922,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.368 job2: (groupid=0, jobs=1): err= 0: pid=82048: Tue Apr 23 02:55:42 2024 00:11:03.368 read: IOPS=1906, BW=7624KiB/s (7807kB/s)(7632KiB/1001msec) 00:11:03.368 slat (nsec): min=12751, max=53183, avg=16503.62, stdev=2572.96 00:11:03.368 clat (usec): min=185, max=526, avg=266.40, stdev=33.19 00:11:03.368 lat (usec): min=206, max=545, avg=282.91, stdev=33.07 00:11:03.368 clat percentiles (usec): 00:11:03.368 | 1.00th=[ 233], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:11:03.368 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 258], 60.00th=[ 265], 00:11:03.368 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 297], 95.00th=[ 338], 00:11:03.368 | 99.00th=[ 400], 99.50th=[ 461], 99.90th=[ 486], 99.95th=[ 529], 00:11:03.368 | 99.99th=[ 529] 00:11:03.368 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:03.368 slat (nsec): min=18459, max=81251, avg=25818.46, stdev=5524.11 00:11:03.368 clat (usec): min=106, max=1825, avg=194.97, stdev=47.00 00:11:03.368 lat (usec): min=129, max=1851, avg=220.78, stdev=47.76 00:11:03.368 clat percentiles (usec): 00:11:03.368 | 1.00th=[ 114], 5.00th=[ 124], 10.00th=[ 141], 20.00th=[ 184], 00:11:03.368 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 202], 00:11:03.368 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 237], 00:11:03.368 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 367], 99.95th=[ 375], 00:11:03.368 | 
99.99th=[ 1827] 00:11:03.368 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:11:03.368 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:03.368 lat (usec) : 250=67.62%, 500=32.33%, 750=0.03% 00:11:03.368 lat (msec) : 2=0.03% 00:11:03.368 cpu : usr=2.10%, sys=6.30%, ctx=3956, majf=0, minf=9 00:11:03.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.368 issued rwts: total=1908,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.368 job3: (groupid=0, jobs=1): err= 0: pid=82049: Tue Apr 23 02:55:42 2024 00:11:03.368 read: IOPS=1911, BW=7644KiB/s (7828kB/s)(7652KiB/1001msec) 00:11:03.368 slat (nsec): min=13242, max=35772, avg=16148.92, stdev=2265.81 00:11:03.368 clat (usec): min=162, max=1829, avg=260.82, stdev=45.47 00:11:03.368 lat (usec): min=178, max=1846, avg=276.97, stdev=45.74 00:11:03.368 clat percentiles (usec): 00:11:03.368 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 245], 00:11:03.368 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:11:03.368 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:11:03.368 | 99.00th=[ 388], 99.50th=[ 461], 99.90th=[ 783], 99.95th=[ 1827], 00:11:03.368 | 99.99th=[ 1827] 00:11:03.368 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:11:03.368 slat (usec): min=18, max=102, avg=25.81, stdev= 6.15 00:11:03.368 clat (usec): min=108, max=449, avg=199.94, stdev=26.77 00:11:03.368 lat (usec): min=138, max=469, avg=225.75, stdev=28.69 00:11:03.368 clat percentiles (usec): 00:11:03.368 | 1.00th=[ 135], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:11:03.368 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:11:03.368 | 70.00th=[ 206], 80.00th=[ 210], 90.00th=[ 221], 95.00th=[ 229], 00:11:03.368 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 420], 99.95th=[ 437], 00:11:03.368 | 99.99th=[ 449] 00:11:03.368 bw ( KiB/s): min= 8192, max= 8192, per=25.03%, avg=8192.00, stdev= 0.00, samples=1 00:11:03.368 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:03.368 lat (usec) : 250=67.08%, 500=32.82%, 750=0.05%, 1000=0.03% 00:11:03.368 lat (msec) : 2=0.03% 00:11:03.368 cpu : usr=2.50%, sys=5.70%, ctx=3962, majf=0, minf=7 00:11:03.368 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:03.368 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.368 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.368 issued rwts: total=1913,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.368 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:03.368 00:11:03.368 Run status group 0 (all jobs): 00:11:03.368 READ: bw=29.5MiB/s (30.9MB/s), 7237KiB/s-7680KiB/s (7410kB/s-7865kB/s), io=29.5MiB (30.9MB), run=1001-1001msec 00:11:03.368 WRITE: bw=32.0MiB/s (33.5MB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1001msec 00:11:03.368 00:11:03.368 Disk stats (read/write): 00:11:03.368 nvme0n1: ios=1586/1796, merge=0/0, ticks=434/404, in_queue=838, util=88.28% 00:11:03.368 nvme0n2: ios=1577/1924, merge=0/0, ticks=419/396, in_queue=815, util=88.34% 00:11:03.368 nvme0n3: ios=1536/1937, merge=0/0, ticks=400/403, in_queue=803, 
util=89.13% 00:11:03.368 nvme0n4: ios=1536/1894, merge=0/0, ticks=402/399, in_queue=801, util=89.69% 00:11:03.368 02:55:42 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:03.368 [global] 00:11:03.368 thread=1 00:11:03.368 invalidate=1 00:11:03.368 rw=randwrite 00:11:03.368 time_based=1 00:11:03.368 runtime=1 00:11:03.368 ioengine=libaio 00:11:03.368 direct=1 00:11:03.368 bs=4096 00:11:03.368 iodepth=1 00:11:03.368 norandommap=0 00:11:03.368 numjobs=1 00:11:03.368 00:11:03.368 verify_dump=1 00:11:03.368 verify_backlog=512 00:11:03.368 verify_state_save=0 00:11:03.368 do_verify=1 00:11:03.368 verify=crc32c-intel 00:11:03.368 [job0] 00:11:03.368 filename=/dev/nvme0n1 00:11:03.368 [job1] 00:11:03.368 filename=/dev/nvme0n2 00:11:03.368 [job2] 00:11:03.368 filename=/dev/nvme0n3 00:11:03.368 [job3] 00:11:03.368 filename=/dev/nvme0n4 00:11:03.368 Could not set queue depth (nvme0n1) 00:11:03.368 Could not set queue depth (nvme0n2) 00:11:03.368 Could not set queue depth (nvme0n3) 00:11:03.368 Could not set queue depth (nvme0n4) 00:11:03.368 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.368 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.368 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.368 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.368 fio-3.35 00:11:03.368 Starting 4 threads 00:11:04.743 00:11:04.743 job0: (groupid=0, jobs=1): err= 0: pid=82107: Tue Apr 23 02:55:43 2024 00:11:04.743 read: IOPS=1406, BW=5626KiB/s (5761kB/s)(5632KiB/1001msec) 00:11:04.743 slat (usec): min=17, max=273, avg=24.10, stdev= 7.91 00:11:04.743 clat (usec): min=151, max=1047, avg=356.24, stdev=66.27 00:11:04.743 lat (usec): min=180, max=1075, avg=380.34, stdev=67.32 00:11:04.743 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 184], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:11:04.744 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 347], 60.00th=[ 355], 00:11:04.744 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 416], 00:11:04.744 | 99.00th=[ 676], 99.50th=[ 693], 99.90th=[ 1004], 99.95th=[ 1045], 00:11:04.744 | 99.99th=[ 1045] 00:11:04.744 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:04.744 slat (usec): min=21, max=103, avg=37.66, stdev= 6.49 00:11:04.744 clat (usec): min=107, max=341, avg=258.94, stdev=30.29 00:11:04.744 lat (usec): min=135, max=384, avg=296.60, stdev=31.34 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 123], 5.00th=[ 221], 10.00th=[ 235], 20.00th=[ 245], 00:11:04.744 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:11:04.744 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 293], 00:11:04.744 | 99.00th=[ 306], 99.50th=[ 314], 99.90th=[ 326], 99.95th=[ 343], 00:11:04.744 | 99.99th=[ 343] 00:11:04.744 bw ( KiB/s): min= 8192, max= 8192, per=26.69%, avg=8192.00, stdev= 0.00, samples=1 00:11:04.744 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:04.744 lat (usec) : 250=14.44%, 500=84.14%, 750=1.29%, 1000=0.07% 00:11:04.744 lat (msec) : 2=0.07% 00:11:04.744 cpu : usr=1.90%, sys=7.50%, ctx=2945, majf=0, minf=13 00:11:04.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.744 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 issued rwts: total=1408,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.744 job1: (groupid=0, jobs=1): err= 0: pid=82108: Tue Apr 23 02:55:43 2024 00:11:04.744 read: IOPS=1381, BW=5526KiB/s (5659kB/s)(5532KiB/1001msec) 00:11:04.744 slat (nsec): min=10964, max=47977, avg=21153.47, stdev=5728.61 00:11:04.744 clat (usec): min=240, max=1965, avg=361.24, stdev=65.50 00:11:04.744 lat (usec): min=254, max=1992, avg=382.39, stdev=65.80 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 293], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 338], 00:11:04.744 | 30.00th=[ 343], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 359], 00:11:04.744 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 392], 95.00th=[ 441], 00:11:04.744 | 99.00th=[ 529], 99.50th=[ 570], 99.90th=[ 1500], 99.95th=[ 1958], 00:11:04.744 | 99.99th=[ 1958] 00:11:04.744 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:04.744 slat (nsec): min=16102, max=97073, avg=31263.80, stdev=7457.48 00:11:04.744 clat (usec): min=169, max=595, avg=270.58, stdev=28.19 00:11:04.744 lat (usec): min=200, max=621, avg=301.85, stdev=29.48 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 188], 5.00th=[ 231], 10.00th=[ 245], 20.00th=[ 255], 00:11:04.744 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:11:04.744 | 70.00th=[ 281], 80.00th=[ 285], 90.00th=[ 297], 95.00th=[ 306], 00:11:04.744 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 441], 99.95th=[ 594], 00:11:04.744 | 99.99th=[ 594] 00:11:04.744 bw ( KiB/s): min= 7880, max= 7880, per=25.68%, avg=7880.00, stdev= 0.00, samples=1 00:11:04.744 iops : min= 1970, max= 1970, avg=1970.00, stdev= 0.00, samples=1 00:11:04.744 lat (usec) : 250=7.47%, 500=91.68%, 750=0.79% 00:11:04.744 lat (msec) : 2=0.07% 00:11:04.744 cpu : usr=2.20%, sys=6.30%, ctx=2920, majf=0, minf=5 00:11:04.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 issued rwts: total=1383,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.744 job2: (groupid=0, jobs=1): err= 0: pid=82109: Tue Apr 23 02:55:43 2024 00:11:04.744 read: IOPS=2603, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1001msec) 00:11:04.744 slat (usec): min=12, max=223, avg=16.55, stdev= 5.53 00:11:04.744 clat (usec): min=31, max=655, avg=179.54, stdev=19.26 00:11:04.744 lat (usec): min=162, max=678, avg=196.09, stdev=20.10 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:11:04.744 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:11:04.744 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 202], 00:11:04.744 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 449], 99.95th=[ 578], 00:11:04.744 | 99.99th=[ 660] 00:11:04.744 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:11:04.744 slat (nsec): min=14756, max=92273, avg=24004.51, stdev=4752.94 00:11:04.744 clat (usec): min=100, max=243, avg=131.46, stdev=12.21 00:11:04.744 lat (usec): min=120, max=336, avg=155.46, stdev=13.21 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 
106], 5.00th=[ 113], 10.00th=[ 117], 20.00th=[ 122], 00:11:04.744 | 30.00th=[ 125], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 135], 00:11:04.744 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 153], 00:11:04.744 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 182], 00:11:04.744 | 99.99th=[ 245] 00:11:04.744 bw ( KiB/s): min=12288, max=12288, per=40.04%, avg=12288.00, stdev= 0.00, samples=1 00:11:04.744 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:11:04.744 lat (usec) : 50=0.02%, 250=99.89%, 500=0.05%, 750=0.04% 00:11:04.744 cpu : usr=3.20%, sys=8.70%, ctx=5681, majf=0, minf=15 00:11:04.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 issued rwts: total=2606,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.744 job3: (groupid=0, jobs=1): err= 0: pid=82110: Tue Apr 23 02:55:43 2024 00:11:04.744 read: IOPS=1381, BW=5526KiB/s (5659kB/s)(5532KiB/1001msec) 00:11:04.744 slat (nsec): min=9742, max=59681, avg=17514.08, stdev=5919.41 00:11:04.744 clat (usec): min=236, max=1875, avg=365.27, stdev=64.18 00:11:04.744 lat (usec): min=255, max=1895, avg=382.79, stdev=64.28 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 330], 20.00th=[ 338], 00:11:04.744 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 355], 60.00th=[ 363], 00:11:04.744 | 70.00th=[ 371], 80.00th=[ 379], 90.00th=[ 396], 95.00th=[ 453], 00:11:04.744 | 99.00th=[ 537], 99.50th=[ 578], 99.90th=[ 1467], 99.95th=[ 1876], 00:11:04.744 | 99.99th=[ 1876] 00:11:04.744 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:04.744 slat (usec): min=12, max=113, avg=24.54, stdev= 7.82 00:11:04.744 clat (usec): min=173, max=584, avg=277.70, stdev=31.27 00:11:04.744 lat (usec): min=195, max=610, avg=302.24, stdev=30.55 00:11:04.744 clat percentiles (usec): 00:11:04.744 | 1.00th=[ 190], 5.00th=[ 223], 10.00th=[ 247], 20.00th=[ 260], 00:11:04.744 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 285], 00:11:04.744 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:11:04.744 | 99.00th=[ 383], 99.50th=[ 396], 99.90th=[ 408], 99.95th=[ 586], 00:11:04.744 | 99.99th=[ 586] 00:11:04.744 bw ( KiB/s): min= 7872, max= 7872, per=25.65%, avg=7872.00, stdev= 0.00, samples=1 00:11:04.744 iops : min= 1968, max= 1968, avg=1968.00, stdev= 0.00, samples=1 00:11:04.744 lat (usec) : 250=6.85%, 500=92.15%, 750=0.92% 00:11:04.744 lat (msec) : 2=0.07% 00:11:04.744 cpu : usr=1.40%, sys=5.20%, ctx=2921, majf=0, minf=12 00:11:04.744 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:04.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:04.744 issued rwts: total=1383,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:04.744 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:04.744 00:11:04.744 Run status group 0 (all jobs): 00:11:04.744 READ: bw=26.5MiB/s (27.7MB/s), 5526KiB/s-10.2MiB/s (5659kB/s-10.7MB/s), io=26.5MiB (27.8MB), run=1001-1001msec 00:11:04.744 WRITE: bw=30.0MiB/s (31.4MB/s), 6138KiB/s-12.0MiB/s (6285kB/s-12.6MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:11:04.744 00:11:04.744 Disk stats 
(read/write): 00:11:04.744 nvme0n1: ios=1114/1536, merge=0/0, ticks=413/423, in_queue=836, util=88.88% 00:11:04.744 nvme0n2: ios=1091/1536, merge=0/0, ticks=396/413, in_queue=809, util=89.09% 00:11:04.744 nvme0n3: ios=2351/2560, merge=0/0, ticks=423/354, in_queue=777, util=89.31% 00:11:04.744 nvme0n4: ios=1042/1536, merge=0/0, ticks=350/381, in_queue=731, util=89.76% 00:11:04.744 02:55:43 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:04.744 [global] 00:11:04.744 thread=1 00:11:04.744 invalidate=1 00:11:04.744 rw=write 00:11:04.744 time_based=1 00:11:04.744 runtime=1 00:11:04.744 ioengine=libaio 00:11:04.744 direct=1 00:11:04.744 bs=4096 00:11:04.744 iodepth=128 00:11:04.744 norandommap=0 00:11:04.744 numjobs=1 00:11:04.744 00:11:04.744 verify_dump=1 00:11:04.744 verify_backlog=512 00:11:04.744 verify_state_save=0 00:11:04.744 do_verify=1 00:11:04.744 verify=crc32c-intel 00:11:04.744 [job0] 00:11:04.744 filename=/dev/nvme0n1 00:11:04.744 [job1] 00:11:04.744 filename=/dev/nvme0n2 00:11:04.744 [job2] 00:11:04.744 filename=/dev/nvme0n3 00:11:04.744 [job3] 00:11:04.744 filename=/dev/nvme0n4 00:11:04.744 Could not set queue depth (nvme0n1) 00:11:04.744 Could not set queue depth (nvme0n2) 00:11:04.744 Could not set queue depth (nvme0n3) 00:11:04.744 Could not set queue depth (nvme0n4) 00:11:04.744 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.744 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.744 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.744 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:04.744 fio-3.35 00:11:04.744 Starting 4 threads 00:11:06.119 00:11:06.119 job0: (groupid=0, jobs=1): err= 0: pid=82163: Tue Apr 23 02:55:44 2024 00:11:06.120 read: IOPS=3031, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1003msec) 00:11:06.120 slat (usec): min=3, max=6914, avg=163.25, stdev=821.27 00:11:06.120 clat (usec): min=367, max=23900, avg=20843.89, stdev=2608.10 00:11:06.120 lat (usec): min=2627, max=23919, avg=21007.15, stdev=2485.52 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[ 3326], 5.00th=[16909], 10.00th=[20579], 20.00th=[20841], 00:11:06.120 | 30.00th=[21103], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365], 00:11:06.120 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22676], 00:11:06.120 | 99.00th=[23987], 99.50th=[23987], 99.90th=[23987], 99.95th=[23987], 00:11:06.120 | 99.99th=[23987] 00:11:06.120 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:06.120 slat (usec): min=12, max=5057, avg=156.08, stdev=725.29 00:11:06.120 clat (usec): min=15569, max=24089, avg=20459.76, stdev=1056.02 00:11:06.120 lat (usec): min=17012, max=24130, avg=20615.84, stdev=760.32 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[16057], 5.00th=[19792], 10.00th=[20055], 20.00th=[20055], 00:11:06.120 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20317], 00:11:06.120 | 70.00th=[20579], 80.00th=[20841], 90.00th=[21890], 95.00th=[21890], 00:11:06.120 | 99.00th=[23725], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:11:06.120 | 99.99th=[23987] 00:11:06.120 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:11:06.120 iops : min= 3072, max= 3072, avg=3072.00, stdev= 
0.00, samples=2 00:11:06.120 lat (usec) : 500=0.02% 00:11:06.120 lat (msec) : 4=0.52%, 10=0.52%, 20=9.67%, 50=89.27% 00:11:06.120 cpu : usr=3.49%, sys=8.88%, ctx=230, majf=0, minf=11 00:11:06.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:06.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.120 issued rwts: total=3041,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.120 job1: (groupid=0, jobs=1): err= 0: pid=82164: Tue Apr 23 02:55:44 2024 00:11:06.120 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1004msec) 00:11:06.120 slat (usec): min=6, max=5722, avg=173.18, stdev=557.44 00:11:06.120 clat (usec): min=763, max=28561, avg=21336.66, stdev=2709.07 00:11:06.120 lat (usec): min=3841, max=28575, avg=21509.83, stdev=2668.68 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[ 5932], 5.00th=[17957], 10.00th=[19006], 20.00th=[20579], 00:11:06.120 | 30.00th=[21103], 40.00th=[21627], 50.00th=[21890], 60.00th=[22152], 00:11:06.120 | 70.00th=[22414], 80.00th=[22676], 90.00th=[22938], 95.00th=[24511], 00:11:06.120 | 99.00th=[26870], 99.50th=[27132], 99.90th=[28443], 99.95th=[28443], 00:11:06.120 | 99.99th=[28443] 00:11:06.120 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:11:06.120 slat (usec): min=11, max=5548, avg=157.68, stdev=642.95 00:11:06.120 clat (usec): min=12911, max=27651, avg=21317.86, stdev=2040.86 00:11:06.120 lat (usec): min=13356, max=27671, avg=21475.54, stdev=1957.02 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[16057], 5.00th=[17695], 10.00th=[19268], 20.00th=[20055], 00:11:06.120 | 30.00th=[20317], 40.00th=[20579], 50.00th=[20841], 60.00th=[21890], 00:11:06.120 | 70.00th=[22152], 80.00th=[22676], 90.00th=[23462], 95.00th=[25035], 00:11:06.120 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27657], 99.95th=[27657], 00:11:06.120 | 99.99th=[27657] 00:11:06.120 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:11:06.120 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:06.120 lat (usec) : 1000=0.02% 00:11:06.120 lat (msec) : 4=0.10%, 10=0.62%, 20=15.84%, 50=83.42% 00:11:06.120 cpu : usr=2.79%, sys=8.18%, ctx=850, majf=0, minf=9 00:11:06.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:06.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.120 issued rwts: total=2862,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.120 job2: (groupid=0, jobs=1): err= 0: pid=82165: Tue Apr 23 02:55:44 2024 00:11:06.120 read: IOPS=2850, BW=11.1MiB/s (11.7MB/s)(11.2MiB/1002msec) 00:11:06.120 slat (usec): min=8, max=4694, avg=168.69, stdev=547.83 00:11:06.120 clat (usec): min=1822, max=26007, avg=21275.60, stdev=2882.93 00:11:06.120 lat (usec): min=1836, max=26858, avg=21444.29, stdev=2853.32 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[ 7308], 5.00th=[17957], 10.00th=[19006], 20.00th=[20579], 00:11:06.120 | 30.00th=[21103], 40.00th=[21890], 50.00th=[21890], 60.00th=[22152], 00:11:06.120 | 70.00th=[22414], 80.00th=[22676], 90.00th=[23200], 95.00th=[23725], 00:11:06.120 | 99.00th=[25297], 99.50th=[25560], 99.90th=[25822], 
99.95th=[25822], 00:11:06.120 | 99.99th=[26084] 00:11:06.120 write: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec); 0 zone resets 00:11:06.120 slat (usec): min=10, max=5565, avg=161.27, stdev=652.25 00:11:06.120 clat (usec): min=15037, max=26732, avg=21275.59, stdev=1652.28 00:11:06.120 lat (usec): min=15818, max=26749, avg=21436.86, stdev=1547.34 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[16057], 5.00th=[19006], 10.00th=[19792], 20.00th=[20055], 00:11:06.120 | 30.00th=[20317], 40.00th=[20579], 50.00th=[21103], 60.00th=[21890], 00:11:06.120 | 70.00th=[22152], 80.00th=[22676], 90.00th=[22938], 95.00th=[24249], 00:11:06.120 | 99.00th=[25560], 99.50th=[26608], 99.90th=[26608], 99.95th=[26608], 00:11:06.120 | 99.99th=[26608] 00:11:06.120 bw ( KiB/s): min=12288, max=12312, per=25.12%, avg=12300.00, stdev=16.97, samples=2 00:11:06.120 iops : min= 3072, max= 3078, avg=3075.00, stdev= 4.24, samples=2 00:11:06.120 lat (msec) : 2=0.17%, 4=0.30%, 10=0.54%, 20=13.93%, 50=85.05% 00:11:06.120 cpu : usr=2.30%, sys=9.09%, ctx=735, majf=0, minf=19 00:11:06.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:06.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.120 issued rwts: total=2856,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.120 job3: (groupid=0, jobs=1): err= 0: pid=82166: Tue Apr 23 02:55:44 2024 00:11:06.120 read: IOPS=3031, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1003msec) 00:11:06.120 slat (usec): min=6, max=6702, avg=163.72, stdev=809.83 00:11:06.120 clat (usec): min=867, max=23562, avg=20831.77, stdev=2208.70 00:11:06.120 lat (usec): min=5615, max=23578, avg=20995.49, stdev=2064.12 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[ 6259], 5.00th=[16909], 10.00th=[19268], 20.00th=[20579], 00:11:06.120 | 30.00th=[20841], 40.00th=[21103], 50.00th=[21365], 60.00th=[21365], 00:11:06.120 | 70.00th=[21627], 80.00th=[21890], 90.00th=[22152], 95.00th=[22676], 00:11:06.120 | 99.00th=[23462], 99.50th=[23462], 99.90th=[23462], 99.95th=[23462], 00:11:06.120 | 99.99th=[23462] 00:11:06.120 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:06.120 slat (usec): min=19, max=4899, avg=154.08, stdev=698.53 00:11:06.120 clat (usec): min=14431, max=23052, avg=20469.59, stdev=1120.63 00:11:06.120 lat (usec): min=17061, max=23097, avg=20623.67, stdev=864.79 00:11:06.120 clat percentiles (usec): 00:11:06.120 | 1.00th=[16188], 5.00th=[18220], 10.00th=[19792], 20.00th=[20055], 00:11:06.120 | 30.00th=[20055], 40.00th=[20317], 50.00th=[20317], 60.00th=[20317], 00:11:06.120 | 70.00th=[20579], 80.00th=[21365], 90.00th=[21890], 95.00th=[22414], 00:11:06.120 | 99.00th=[22676], 99.50th=[22938], 99.90th=[22938], 99.95th=[22938], 00:11:06.120 | 99.99th=[22938] 00:11:06.120 bw ( KiB/s): min=12288, max=12288, per=25.10%, avg=12288.00, stdev= 0.00, samples=2 00:11:06.120 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:11:06.120 lat (usec) : 1000=0.02% 00:11:06.120 lat (msec) : 10=0.52%, 20=15.43%, 50=84.03% 00:11:06.120 cpu : usr=3.79%, sys=10.38%, ctx=193, majf=0, minf=11 00:11:06.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:11:06.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:11:06.120 issued rwts: total=3041,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.120 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.120 00:11:06.120 Run status group 0 (all jobs): 00:11:06.120 READ: bw=45.9MiB/s (48.1MB/s), 11.1MiB/s-11.8MiB/s (11.7MB/s-12.4MB/s), io=46.1MiB (48.3MB), run=1002-1004msec 00:11:06.120 WRITE: bw=47.8MiB/s (50.1MB/s), 12.0MiB/s-12.0MiB/s (12.5MB/s-12.6MB/s), io=48.0MiB (50.3MB), run=1002-1004msec 00:11:06.120 00:11:06.120 Disk stats (read/write): 00:11:06.120 nvme0n1: ios=2609/2656, merge=0/0, ticks=13015/12340, in_queue=25355, util=88.06% 00:11:06.120 nvme0n2: ios=2533/2560, merge=0/0, ticks=13525/12014, in_queue=25539, util=89.02% 00:11:06.120 nvme0n3: ios=2474/2560, merge=0/0, ticks=13165/12300, in_queue=25465, util=89.04% 00:11:06.120 nvme0n4: ios=2560/2688, merge=0/0, ticks=12722/12044, in_queue=24766, util=89.70% 00:11:06.120 02:55:44 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:06.120 [global] 00:11:06.120 thread=1 00:11:06.120 invalidate=1 00:11:06.120 rw=randwrite 00:11:06.120 time_based=1 00:11:06.120 runtime=1 00:11:06.120 ioengine=libaio 00:11:06.120 direct=1 00:11:06.120 bs=4096 00:11:06.120 iodepth=128 00:11:06.120 norandommap=0 00:11:06.120 numjobs=1 00:11:06.120 00:11:06.120 verify_dump=1 00:11:06.120 verify_backlog=512 00:11:06.120 verify_state_save=0 00:11:06.120 do_verify=1 00:11:06.120 verify=crc32c-intel 00:11:06.120 [job0] 00:11:06.120 filename=/dev/nvme0n1 00:11:06.120 [job1] 00:11:06.120 filename=/dev/nvme0n2 00:11:06.120 [job2] 00:11:06.120 filename=/dev/nvme0n3 00:11:06.120 [job3] 00:11:06.120 filename=/dev/nvme0n4 00:11:06.120 Could not set queue depth (nvme0n1) 00:11:06.120 Could not set queue depth (nvme0n2) 00:11:06.120 Could not set queue depth (nvme0n3) 00:11:06.120 Could not set queue depth (nvme0n4) 00:11:06.120 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.120 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.120 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.120 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.121 fio-3.35 00:11:06.121 Starting 4 threads 00:11:07.530 00:11:07.530 job0: (groupid=0, jobs=1): err= 0: pid=82225: Tue Apr 23 02:55:46 2024 00:11:07.530 read: IOPS=3259, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1004msec) 00:11:07.530 slat (usec): min=10, max=7058, avg=152.84, stdev=599.38 00:11:07.530 clat (usec): min=1525, max=30865, avg=19036.56, stdev=5458.56 00:11:07.530 lat (usec): min=3812, max=30880, avg=19189.40, stdev=5476.38 00:11:07.530 clat percentiles (usec): 00:11:07.530 | 1.00th=[ 7767], 5.00th=[13304], 10.00th=[13698], 20.00th=[13960], 00:11:07.530 | 30.00th=[14222], 40.00th=[14484], 50.00th=[18220], 60.00th=[22938], 00:11:07.530 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25035], 95.00th=[27132], 00:11:07.530 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30540], 99.95th=[30802], 00:11:07.530 | 99.99th=[30802] 00:11:07.530 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:11:07.530 slat (usec): min=12, max=5961, avg=131.39, stdev=562.87 00:11:07.530 clat (usec): min=10316, max=30818, avg=17987.40, stdev=5176.99 00:11:07.530 lat (usec): min=10443, max=31355, avg=18118.79, stdev=5185.54 00:11:07.530 clat 
percentiles (usec): 00:11:07.530 | 1.00th=[10945], 5.00th=[12780], 10.00th=[13042], 20.00th=[13304], 00:11:07.530 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[20579], 00:11:07.530 | 70.00th=[22676], 80.00th=[23462], 90.00th=[24511], 95.00th=[25560], 00:11:07.530 | 99.00th=[28967], 99.50th=[28967], 99.90th=[30016], 99.95th=[30802], 00:11:07.530 | 99.99th=[30802] 00:11:07.530 bw ( KiB/s): min=12288, max=16416, per=22.01%, avg=14352.00, stdev=2918.94, samples=2 00:11:07.530 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:11:07.530 lat (msec) : 2=0.01%, 4=0.09%, 10=0.38%, 20=55.71%, 50=43.81% 00:11:07.530 cpu : usr=3.79%, sys=10.27%, ctx=559, majf=0, minf=9 00:11:07.530 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:07.530 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.531 issued rwts: total=3273,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.531 job1: (groupid=0, jobs=1): err= 0: pid=82226: Tue Apr 23 02:55:46 2024 00:11:07.531 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:11:07.531 slat (usec): min=8, max=5236, avg=150.95, stdev=588.87 00:11:07.531 clat (usec): min=1216, max=33803, avg=19447.07, stdev=5566.13 00:11:07.531 lat (usec): min=3678, max=33820, avg=19598.02, stdev=5578.47 00:11:07.531 clat percentiles (usec): 00:11:07.531 | 1.00th=[ 7439], 5.00th=[13566], 10.00th=[13829], 20.00th=[14222], 00:11:07.531 | 30.00th=[14484], 40.00th=[15270], 50.00th=[17171], 60.00th=[23725], 00:11:07.531 | 70.00th=[24249], 80.00th=[24511], 90.00th=[25560], 95.00th=[27132], 00:11:07.531 | 99.00th=[30540], 99.50th=[31065], 99.90th=[33162], 99.95th=[33817], 00:11:07.531 | 99.99th=[33817] 00:11:07.531 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:07.531 slat (usec): min=11, max=7259, avg=137.33, stdev=595.96 00:11:07.531 clat (usec): min=10062, max=27376, avg=18025.34, stdev=4777.80 00:11:07.531 lat (usec): min=10154, max=27396, avg=18162.67, stdev=4785.28 00:11:07.531 clat percentiles (usec): 00:11:07.531 | 1.00th=[11076], 5.00th=[12911], 10.00th=[13042], 20.00th=[13304], 00:11:07.531 | 30.00th=[13566], 40.00th=[13829], 50.00th=[15926], 60.00th=[21365], 00:11:07.531 | 70.00th=[22414], 80.00th=[23200], 90.00th=[24249], 95.00th=[24773], 00:11:07.531 | 99.00th=[26084], 99.50th=[26084], 99.90th=[27132], 99.95th=[27395], 00:11:07.531 | 99.99th=[27395] 00:11:07.531 bw ( KiB/s): min=12288, max=16384, per=21.98%, avg=14336.00, stdev=2896.31, samples=2 00:11:07.531 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:11:07.531 lat (msec) : 2=0.01%, 4=0.27%, 10=0.21%, 20=52.72%, 50=46.79% 00:11:07.531 cpu : usr=3.08%, sys=9.75%, ctx=578, majf=0, minf=15 00:11:07.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:07.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.531 issued rwts: total=3201,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.531 job2: (groupid=0, jobs=1): err= 0: pid=82227: Tue Apr 23 02:55:46 2024 00:11:07.531 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(17.9MiB/1003msec) 00:11:07.531 slat (usec): min=5, max=3617, avg=104.76, stdev=473.18 00:11:07.531 clat 
(usec): min=2279, max=16479, avg=13923.48, stdev=1657.68 00:11:07.531 lat (usec): min=2327, max=16494, avg=14028.23, stdev=1599.32 00:11:07.531 clat percentiles (usec): 00:11:07.531 | 1.00th=[ 6063], 5.00th=[12649], 10.00th=[12911], 20.00th=[13173], 00:11:07.531 | 30.00th=[13435], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:11:07.531 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16057], 95.00th=[16188], 00:11:07.531 | 99.00th=[16450], 99.50th=[16450], 99.90th=[16450], 99.95th=[16450], 00:11:07.531 | 99.99th=[16450] 00:11:07.531 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:07.531 slat (usec): min=10, max=5220, avg=104.30, stdev=443.85 00:11:07.531 clat (usec): min=9760, max=18402, avg=13644.13, stdev=1274.25 00:11:07.531 lat (usec): min=10810, max=18447, avg=13748.42, stdev=1217.17 00:11:07.531 clat percentiles (usec): 00:11:07.531 | 1.00th=[10683], 5.00th=[12387], 10.00th=[12649], 20.00th=[12780], 00:11:07.531 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:11:07.531 | 70.00th=[14091], 80.00th=[14746], 90.00th=[15533], 95.00th=[15926], 00:11:07.531 | 99.00th=[17957], 99.50th=[18220], 99.90th=[18482], 99.95th=[18482], 00:11:07.531 | 99.99th=[18482] 00:11:07.531 bw ( KiB/s): min=17450, max=19448, per=28.29%, avg=18449.00, stdev=1412.80, samples=2 00:11:07.531 iops : min= 4362, max= 4862, avg=4612.00, stdev=353.55, samples=2 00:11:07.531 lat (msec) : 4=0.29%, 10=0.74%, 20=98.97% 00:11:07.531 cpu : usr=4.59%, sys=14.07%, ctx=351, majf=0, minf=15 00:11:07.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:07.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.531 issued rwts: total=4571,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.531 job3: (groupid=0, jobs=1): err= 0: pid=82228: Tue Apr 23 02:55:46 2024 00:11:07.531 read: IOPS=4467, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1003msec) 00:11:07.531 slat (usec): min=9, max=3658, avg=106.89, stdev=504.53 00:11:07.531 clat (usec): min=435, max=18925, avg=14091.07, stdev=1713.20 00:11:07.531 lat (usec): min=3326, max=18947, avg=14197.96, stdev=1643.56 00:11:07.531 clat percentiles (usec): 00:11:07.531 | 1.00th=[ 7111], 5.00th=[12780], 10.00th=[13173], 20.00th=[13304], 00:11:07.531 | 30.00th=[13435], 40.00th=[13566], 50.00th=[13698], 60.00th=[13829], 00:11:07.531 | 70.00th=[14746], 80.00th=[15533], 90.00th=[16057], 95.00th=[16319], 00:11:07.531 | 99.00th=[17695], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:11:07.531 | 99.99th=[19006] 00:11:07.531 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:11:07.531 slat (usec): min=11, max=5100, avg=104.92, stdev=450.50 00:11:07.531 clat (usec): min=10134, max=16098, avg=13756.61, stdev=1183.60 00:11:07.531 lat (usec): min=12414, max=17661, avg=13861.53, stdev=1103.98 00:11:07.531 clat percentiles (usec): 00:11:07.531 | 1.00th=[10683], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:11:07.531 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:11:07.531 | 70.00th=[14615], 80.00th=[15139], 90.00th=[15664], 95.00th=[15926], 00:11:07.531 | 99.00th=[16057], 99.50th=[16057], 99.90th=[16057], 99.95th=[16057], 00:11:07.531 | 99.99th=[16057] 00:11:07.531 bw ( KiB/s): min=17194, max=19704, per=28.29%, avg=18449.00, stdev=1774.84, samples=2 00:11:07.531 iops : min= 4298, 
max= 4926, avg=4612.00, stdev=444.06, samples=2 00:11:07.531 lat (usec) : 500=0.01% 00:11:07.531 lat (msec) : 4=0.31%, 10=0.40%, 20=99.28% 00:11:07.531 cpu : usr=5.29%, sys=12.87%, ctx=320, majf=0, minf=11 00:11:07.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:07.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:07.531 issued rwts: total=4481,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:07.531 00:11:07.531 Run status group 0 (all jobs): 00:11:07.531 READ: bw=60.3MiB/s (63.3MB/s), 12.4MiB/s-17.8MiB/s (13.0MB/s-18.7MB/s), io=60.6MiB (63.6MB), run=1003-1005msec 00:11:07.531 WRITE: bw=63.7MiB/s (66.8MB/s), 13.9MiB/s-17.9MiB/s (14.6MB/s-18.8MB/s), io=64.0MiB (67.1MB), run=1003-1005msec 00:11:07.531 00:11:07.531 Disk stats (read/write): 00:11:07.531 nvme0n1: ios=3033/3072, merge=0/0, ticks=13468/11363, in_queue=24831, util=87.75% 00:11:07.531 nvme0n2: ios=2933/3072, merge=0/0, ticks=13000/11316, in_queue=24316, util=88.18% 00:11:07.531 nvme0n3: ios=3680/4096, merge=0/0, ticks=11878/12356, in_queue=24234, util=89.00% 00:11:07.531 nvme0n4: ios=3584/4096, merge=0/0, ticks=11663/12340, in_queue=24003, util=89.56% 00:11:07.531 02:55:46 -- target/fio.sh@55 -- # sync 00:11:07.531 02:55:46 -- target/fio.sh@59 -- # fio_pid=82241 00:11:07.531 02:55:46 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:07.531 02:55:46 -- target/fio.sh@61 -- # sleep 3 00:11:07.531 [global] 00:11:07.531 thread=1 00:11:07.531 invalidate=1 00:11:07.531 rw=read 00:11:07.531 time_based=1 00:11:07.531 runtime=10 00:11:07.531 ioengine=libaio 00:11:07.531 direct=1 00:11:07.531 bs=4096 00:11:07.531 iodepth=1 00:11:07.531 norandommap=1 00:11:07.531 numjobs=1 00:11:07.531 00:11:07.531 [job0] 00:11:07.531 filename=/dev/nvme0n1 00:11:07.531 [job1] 00:11:07.531 filename=/dev/nvme0n2 00:11:07.531 [job2] 00:11:07.531 filename=/dev/nvme0n3 00:11:07.531 [job3] 00:11:07.531 filename=/dev/nvme0n4 00:11:07.531 Could not set queue depth (nvme0n1) 00:11:07.531 Could not set queue depth (nvme0n2) 00:11:07.531 Could not set queue depth (nvme0n3) 00:11:07.531 Could not set queue depth (nvme0n4) 00:11:07.531 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.531 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.531 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.531 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:07.531 fio-3.35 00:11:07.531 Starting 4 threads 00:11:10.817 02:55:49 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:10.817 fio: pid=82284, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:10.817 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=33251328, buflen=4096 00:11:10.817 02:55:49 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:10.817 fio: pid=82283, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:10.817 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=36933632, buflen=4096 00:11:10.817 02:55:49 -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:10.817 02:55:49 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:11.076 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=50896896, buflen=4096 00:11:11.076 fio: pid=82281, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:11.076 02:55:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.076 02:55:50 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:11.335 fio: pid=82282, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:11.335 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=10797056, buflen=4096 00:11:11.335 00:11:11.335 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82281: Tue Apr 23 02:55:50 2024 00:11:11.335 read: IOPS=3637, BW=14.2MiB/s (14.9MB/s)(48.5MiB/3416msec) 00:11:11.335 slat (usec): min=8, max=15251, avg=19.73, stdev=177.94 00:11:11.335 clat (nsec): min=1564, max=6747.2k, avg=253469.81, stdev=151069.34 00:11:11.335 lat (usec): min=146, max=15515, avg=273.20, stdev=234.38 00:11:11.335 clat percentiles (usec): 00:11:11.335 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 163], 00:11:11.335 | 30.00th=[ 172], 40.00th=[ 180], 50.00th=[ 210], 60.00th=[ 265], 00:11:11.335 | 70.00th=[ 351], 80.00th=[ 363], 90.00th=[ 375], 95.00th=[ 383], 00:11:11.335 | 99.00th=[ 404], 99.50th=[ 412], 99.90th=[ 1958], 99.95th=[ 3687], 00:11:11.335 | 99.99th=[ 5211] 00:11:11.335 bw ( KiB/s): min=10408, max=22208, per=27.31%, avg=14481.33, stdev=5391.80, samples=6 00:11:11.335 iops : min= 2602, max= 5552, avg=3620.33, stdev=1347.95, samples=6 00:11:11.335 lat (usec) : 2=0.01%, 250=58.74%, 500=41.06%, 750=0.06% 00:11:11.335 lat (msec) : 2=0.04%, 4=0.07%, 10=0.02% 00:11:11.335 cpu : usr=1.20%, sys=5.74%, ctx=12450, majf=0, minf=1 00:11:11.335 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.335 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.335 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.335 issued rwts: total=12427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.335 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.335 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82282: Tue Apr 23 02:55:50 2024 00:11:11.335 read: IOPS=5189, BW=20.3MiB/s (21.3MB/s)(74.3MiB/3665msec) 00:11:11.335 slat (usec): min=9, max=15384, avg=18.75, stdev=217.87 00:11:11.335 clat (usec): min=131, max=3461, avg=172.51, stdev=47.28 00:11:11.335 lat (usec): min=144, max=15939, avg=191.26, stdev=225.02 00:11:11.335 clat percentiles (usec): 00:11:11.335 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 155], 00:11:11.335 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:11:11.336 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 200], 95.00th=[ 221], 00:11:11.336 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 449], 99.95th=[ 644], 00:11:11.336 | 99.99th=[ 3032] 00:11:11.336 bw ( KiB/s): min=16990, max=22288, per=39.39%, avg=20886.57, stdev=1885.81, samples=7 00:11:11.336 iops : min= 4247, max= 5572, avg=5221.57, stdev=471.63, samples=7 00:11:11.336 lat (usec) : 250=96.92%, 500=2.99%, 750=0.05%, 1000=0.01% 00:11:11.336 lat (msec) : 2=0.01%, 4=0.02% 00:11:11.336 cpu : 
usr=1.61%, sys=6.63%, ctx=19046, majf=0, minf=1 00:11:11.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.336 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.336 issued rwts: total=19021,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.336 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82283: Tue Apr 23 02:55:50 2024 00:11:11.336 read: IOPS=2867, BW=11.2MiB/s (11.7MB/s)(35.2MiB/3145msec) 00:11:11.336 slat (usec): min=9, max=12772, avg=18.44, stdev=155.70 00:11:11.336 clat (usec): min=150, max=2948, avg=328.52, stdev=60.32 00:11:11.336 lat (usec): min=166, max=13097, avg=346.96, stdev=166.24 00:11:11.336 clat percentiles (usec): 00:11:11.336 | 1.00th=[ 186], 5.00th=[ 262], 10.00th=[ 273], 20.00th=[ 285], 00:11:11.336 | 30.00th=[ 293], 40.00th=[ 306], 50.00th=[ 326], 60.00th=[ 355], 00:11:11.336 | 70.00th=[ 367], 80.00th=[ 375], 90.00th=[ 388], 95.00th=[ 396], 00:11:11.336 | 99.00th=[ 408], 99.50th=[ 416], 99.90th=[ 478], 99.95th=[ 717], 00:11:11.336 | 99.99th=[ 2933] 00:11:11.336 bw ( KiB/s): min=10400, max=12776, per=21.52%, avg=11409.33, stdev=1085.87, samples=6 00:11:11.336 iops : min= 2600, max= 3194, avg=2852.33, stdev=271.47, samples=6 00:11:11.336 lat (usec) : 250=3.33%, 500=96.57%, 750=0.04% 00:11:11.336 lat (msec) : 2=0.03%, 4=0.01% 00:11:11.336 cpu : usr=0.76%, sys=4.45%, ctx=9037, majf=0, minf=1 00:11:11.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.336 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.336 issued rwts: total=9018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.336 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:11.336 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=82284: Tue Apr 23 02:55:50 2024 00:11:11.336 read: IOPS=2794, BW=10.9MiB/s (11.4MB/s)(31.7MiB/2905msec) 00:11:11.336 slat (usec): min=13, max=236, avg=22.02, stdev= 7.14 00:11:11.336 clat (usec): min=163, max=2564, avg=333.39, stdev=61.71 00:11:11.336 lat (usec): min=178, max=2590, avg=355.41, stdev=64.50 00:11:11.336 clat percentiles (usec): 00:11:11.336 | 1.00th=[ 253], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 289], 00:11:11.336 | 30.00th=[ 297], 40.00th=[ 310], 50.00th=[ 338], 60.00th=[ 351], 00:11:11.336 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 379], 95.00th=[ 388], 00:11:11.336 | 99.00th=[ 578], 99.50th=[ 594], 99.90th=[ 676], 99.95th=[ 1123], 00:11:11.336 | 99.99th=[ 2573] 00:11:11.336 bw ( KiB/s): min=10064, max=12824, per=21.45%, avg=11372.80, stdev=1363.86, samples=5 00:11:11.336 iops : min= 2516, max= 3206, avg=2843.20, stdev=340.97, samples=5 00:11:11.336 lat (usec) : 250=0.78%, 500=97.65%, 750=1.47%, 1000=0.04% 00:11:11.336 lat (msec) : 2=0.05%, 4=0.01% 00:11:11.336 cpu : usr=1.17%, sys=5.27%, ctx=8120, majf=0, minf=1 00:11:11.336 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:11.336 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.336 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.336 issued rwts: total=8119,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.336 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:11:11.336 00:11:11.336 Run status group 0 (all jobs): 00:11:11.336 READ: bw=51.8MiB/s (54.3MB/s), 10.9MiB/s-20.3MiB/s (11.4MB/s-21.3MB/s), io=190MiB (199MB), run=2905-3665msec 00:11:11.336 00:11:11.336 Disk stats (read/write): 00:11:11.336 nvme0n1: ios=12232/0, merge=0/0, ticks=3050/0, in_queue=3050, util=94.99% 00:11:11.336 nvme0n2: ios=18760/0, merge=0/0, ticks=3277/0, in_queue=3277, util=95.02% 00:11:11.336 nvme0n3: ios=8941/0, merge=0/0, ticks=2816/0, in_queue=2816, util=96.18% 00:11:11.336 nvme0n4: ios=8036/0, merge=0/0, ticks=2733/0, in_queue=2733, util=96.76% 00:11:11.336 02:55:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.336 02:55:50 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:11.595 02:55:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.595 02:55:50 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:11.854 02:55:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:11.854 02:55:50 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:12.113 02:55:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.113 02:55:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:12.373 02:55:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:12.373 02:55:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:12.632 02:55:51 -- target/fio.sh@69 -- # fio_status=0 00:11:12.632 02:55:51 -- target/fio.sh@70 -- # wait 82241 00:11:12.632 02:55:51 -- target/fio.sh@70 -- # fio_status=4 00:11:12.632 02:55:51 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.632 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.632 02:55:51 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.632 02:55:51 -- common/autotest_common.sh@1205 -- # local i=0 00:11:12.632 02:55:51 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:12.632 02:55:51 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.632 02:55:51 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.632 02:55:51 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:12.632 nvmf hotplug test: fio failed as expected 00:11:12.632 02:55:51 -- common/autotest_common.sh@1217 -- # return 0 00:11:12.632 02:55:51 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:12.632 02:55:51 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:12.632 02:55:51 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:12.891 02:55:51 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:12.891 02:55:51 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:12.891 02:55:51 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:12.891 02:55:51 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:12.891 02:55:51 -- target/fio.sh@91 -- # nvmftestfini 00:11:12.891 02:55:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:12.891 02:55:51 -- 
nvmf/common.sh@117 -- # sync 00:11:12.891 02:55:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:12.891 02:55:51 -- nvmf/common.sh@120 -- # set +e 00:11:12.891 02:55:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:12.891 02:55:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:12.891 rmmod nvme_tcp 00:11:12.891 rmmod nvme_fabrics 00:11:12.891 rmmod nvme_keyring 00:11:12.891 02:55:52 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:13.150 02:55:52 -- nvmf/common.sh@124 -- # set -e 00:11:13.150 02:55:52 -- nvmf/common.sh@125 -- # return 0 00:11:13.150 02:55:52 -- nvmf/common.sh@478 -- # '[' -n 81855 ']' 00:11:13.150 02:55:52 -- nvmf/common.sh@479 -- # killprocess 81855 00:11:13.150 02:55:52 -- common/autotest_common.sh@936 -- # '[' -z 81855 ']' 00:11:13.151 02:55:52 -- common/autotest_common.sh@940 -- # kill -0 81855 00:11:13.151 02:55:52 -- common/autotest_common.sh@941 -- # uname 00:11:13.151 02:55:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:13.151 02:55:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81855 00:11:13.151 killing process with pid 81855 00:11:13.151 02:55:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:13.151 02:55:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:13.151 02:55:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81855' 00:11:13.151 02:55:52 -- common/autotest_common.sh@955 -- # kill 81855 00:11:13.151 02:55:52 -- common/autotest_common.sh@960 -- # wait 81855 00:11:13.151 02:55:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:13.151 02:55:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:13.151 02:55:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:13.151 02:55:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.151 02:55:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:13.151 02:55:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.151 02:55:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.151 02:55:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.151 02:55:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:13.151 00:11:13.151 real 0m19.308s 00:11:13.151 user 1m13.476s 00:11:13.151 sys 0m10.094s 00:11:13.151 02:55:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:13.151 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:13.151 ************************************ 00:11:13.151 END TEST nvmf_fio_target 00:11:13.151 ************************************ 00:11:13.151 02:55:52 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:13.410 02:55:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:13.410 02:55:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.410 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:13.410 ************************************ 00:11:13.410 START TEST nvmf_bdevio 00:11:13.410 ************************************ 00:11:13.410 02:55:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:13.410 * Looking for test storage... 
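The fio errors above are the point of the hotplug test: the loop deletes each Malloc bdev while fio jobs are still reading from the exported namespaces, so every job is expected to die with err=121 (EREMOTEIO, "Remote I/O error"); the harness then records fio_status=4 and prints "nvmf hotplug test: fio failed as expected". A minimal sketch of the same expect-failure pattern, reusing the rpc.py path, bdev name, and device node from the log (the fio flags themselves are illustrative, not the ones fio.sh uses):

  # Start fio against the NVMe-oF block device, then pull the backing bdev away.
  fio --name=hotplug --filename=/dev/nvme0n1 --direct=1 --rw=read --bs=4k \
      --time_based --runtime=30 &
  fio_pid=$!
  sleep 5
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
  # fio should exit nonzero once its reads start failing with EREMOTEIO (121).
  wait "$fio_pid" && echo "unexpected: fio survived" || echo "fio failed as expected"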
00:11:13.410 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:13.410 02:55:52 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:13.410 02:55:52 -- nvmf/common.sh@7 -- # uname -s 00:11:13.410 02:55:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.410 02:55:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.410 02:55:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.410 02:55:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.410 02:55:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.410 02:55:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.410 02:55:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.410 02:55:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.410 02:55:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.410 02:55:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.410 02:55:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:11:13.410 02:55:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:11:13.410 02:55:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.410 02:55:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.410 02:55:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:13.410 02:55:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.410 02:55:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:13.410 02:55:52 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.410 02:55:52 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.410 02:55:52 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.410 02:55:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.410 02:55:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.410 02:55:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.410 02:55:52 -- paths/export.sh@5 -- # export PATH 00:11:13.411 02:55:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.411 02:55:52 -- nvmf/common.sh@47 -- # : 0 00:11:13.411 02:55:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:13.411 02:55:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:13.411 02:55:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:13.411 02:55:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.411 02:55:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.411 02:55:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:13.411 02:55:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:13.411 02:55:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:13.411 02:55:52 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:13.411 02:55:52 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:13.411 02:55:52 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:13.411 02:55:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:13.411 02:55:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.411 02:55:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:13.411 02:55:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:13.411 02:55:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:13.411 02:55:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.411 02:55:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.411 02:55:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.411 02:55:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:13.411 02:55:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:13.411 02:55:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:13.411 02:55:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:13.411 02:55:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:13.411 02:55:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:13.411 02:55:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:13.411 02:55:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:13.411 02:55:52 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:13.411 02:55:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:13.411 02:55:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:13.411 02:55:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:13.411 02:55:52 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:13.411 02:55:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:13.411 02:55:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:13.411 02:55:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:13.411 02:55:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:13.411 02:55:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:13.411 02:55:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:13.411 02:55:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:13.411 Cannot find device "nvmf_tgt_br" 00:11:13.411 02:55:52 -- nvmf/common.sh@155 -- # true 00:11:13.411 02:55:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:13.411 Cannot find device "nvmf_tgt_br2" 00:11:13.411 02:55:52 -- nvmf/common.sh@156 -- # true 00:11:13.411 02:55:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:13.411 02:55:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:13.411 Cannot find device "nvmf_tgt_br" 00:11:13.411 02:55:52 -- nvmf/common.sh@158 -- # true 00:11:13.411 02:55:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:13.411 Cannot find device "nvmf_tgt_br2" 00:11:13.411 02:55:52 -- nvmf/common.sh@159 -- # true 00:11:13.411 02:55:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:13.671 02:55:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:13.671 02:55:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:13.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.671 02:55:52 -- nvmf/common.sh@162 -- # true 00:11:13.671 02:55:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:13.671 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:13.671 02:55:52 -- nvmf/common.sh@163 -- # true 00:11:13.671 02:55:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:13.671 02:55:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:13.671 02:55:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:13.671 02:55:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:13.671 02:55:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:13.671 02:55:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:13.671 02:55:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:13.671 02:55:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:13.671 02:55:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:13.671 02:55:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:13.671 02:55:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:13.671 02:55:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:13.671 02:55:52 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:13.671 02:55:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:13.671 02:55:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:13.671 02:55:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:13.671 02:55:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:13.671 02:55:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:13.671 02:55:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:13.671 02:55:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:13.671 02:55:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:13.671 02:55:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:13.671 02:55:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:13.671 02:55:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:13.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:13.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:11:13.671 00:11:13.671 --- 10.0.0.2 ping statistics --- 00:11:13.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.671 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:11:13.671 02:55:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:13.671 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:13.671 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:13.671 00:11:13.671 --- 10.0.0.3 ping statistics --- 00:11:13.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.671 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:13.671 02:55:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:13.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:13.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:13.671 00:11:13.671 --- 10.0.0.1 ping statistics --- 00:11:13.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:13.671 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:13.671 02:55:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:13.671 02:55:52 -- nvmf/common.sh@422 -- # return 0 00:11:13.671 02:55:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:13.671 02:55:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:13.671 02:55:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:13.671 02:55:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:13.671 02:55:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:13.671 02:55:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:13.671 02:55:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:13.671 02:55:52 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:13.671 02:55:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:13.671 02:55:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:13.671 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:13.671 02:55:52 -- nvmf/common.sh@470 -- # nvmfpid=82552 00:11:13.671 02:55:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:13.671 02:55:52 -- nvmf/common.sh@471 -- # waitforlisten 82552 00:11:13.671 02:55:52 -- common/autotest_common.sh@817 -- # '[' -z 82552 ']' 00:11:13.671 02:55:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
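The target is only reachable because nvmf_veth_init builds the whole test topology from scratch: a network namespace holds the target ends of the veth pairs, the host keeps the initiator end, a bridge stitches the peer interfaces together, and one iptables rule admits the NVMe/TCP port. Condensed from the commands visible in the log (run as root; the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way as the first):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # the initiator side can now reach the target address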
00:11:13.671 02:55:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:13.671 02:55:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.671 02:55:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:13.671 02:55:52 -- common/autotest_common.sh@10 -- # set +x 00:11:13.930 [2024-04-23 02:55:52.850217] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:13.930 [2024-04-23 02:55:52.850292] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.930 [2024-04-23 02:55:52.973187] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:13.930 [2024-04-23 02:55:52.992195] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.930 [2024-04-23 02:55:53.032544] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.930 [2024-04-23 02:55:53.032598] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.930 [2024-04-23 02:55:53.032611] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.930 [2024-04-23 02:55:53.032622] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.930 [2024-04-23 02:55:53.032631] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:13.930 [2024-04-23 02:55:53.032764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:13.930 [2024-04-23 02:55:53.032811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:13.930 [2024-04-23 02:55:53.032954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:13.930 [2024-04-23 02:55:53.032960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.189 02:55:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:14.189 02:55:53 -- common/autotest_common.sh@850 -- # return 0 00:11:14.189 02:55:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:14.189 02:55:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:14.189 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 02:55:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.189 02:55:53 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:14.189 02:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.189 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 [2024-04-23 02:55:53.159505] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.189 02:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.189 02:55:53 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:14.189 02:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.189 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 Malloc0 00:11:14.189 02:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.189 02:55:53 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:14.189 02:55:53 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.189 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 02:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.189 02:55:53 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:14.189 02:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.189 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 02:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.189 02:55:53 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.189 02:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.189 02:55:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.189 [2024-04-23 02:55:53.216732] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.189 02:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.189 02:55:53 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:14.189 02:55:53 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:14.189 02:55:53 -- nvmf/common.sh@521 -- # config=() 00:11:14.189 02:55:53 -- nvmf/common.sh@521 -- # local subsystem config 00:11:14.189 02:55:53 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:14.189 02:55:53 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:14.189 { 00:11:14.189 "params": { 00:11:14.189 "name": "Nvme$subsystem", 00:11:14.189 "trtype": "$TEST_TRANSPORT", 00:11:14.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:14.189 "adrfam": "ipv4", 00:11:14.189 "trsvcid": "$NVMF_PORT", 00:11:14.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:14.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:14.189 "hdgst": ${hdgst:-false}, 00:11:14.189 "ddgst": ${ddgst:-false} 00:11:14.189 }, 00:11:14.189 "method": "bdev_nvme_attach_controller" 00:11:14.189 } 00:11:14.189 EOF 00:11:14.189 )") 00:11:14.189 02:55:53 -- nvmf/common.sh@543 -- # cat 00:11:14.189 02:55:53 -- nvmf/common.sh@545 -- # jq . 00:11:14.189 02:55:53 -- nvmf/common.sh@546 -- # IFS=, 00:11:14.189 02:55:53 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:14.189 "params": { 00:11:14.189 "name": "Nvme1", 00:11:14.189 "trtype": "tcp", 00:11:14.189 "traddr": "10.0.0.2", 00:11:14.189 "adrfam": "ipv4", 00:11:14.189 "trsvcid": "4420", 00:11:14.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:14.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:14.189 "hdgst": false, 00:11:14.189 "ddgst": false 00:11:14.189 }, 00:11:14.189 "method": "bdev_nvme_attach_controller" 00:11:14.189 }' 00:11:14.189 [2024-04-23 02:55:53.271022] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:14.189 [2024-04-23 02:55:53.271099] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82585 ] 00:11:14.448 [2024-04-23 02:55:53.393370] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
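The rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations against the target's /var/tmp/spdk.sock, so the same subsystem can be rebuilt by hand; the arguments below are copied verbatim from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420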
00:11:14.448 [2024-04-23 02:55:53.413964] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:14.448 [2024-04-23 02:55:53.454850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:14.448 [2024-04-23 02:55:53.455013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:14.448 [2024-04-23 02:55:53.455016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.448 I/O targets: 00:11:14.448 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:14.448 00:11:14.448 00:11:14.448 CUnit - A unit testing framework for C - Version 2.1-3 00:11:14.448 http://cunit.sourceforge.net/ 00:11:14.448 00:11:14.448 00:11:14.448 Suite: bdevio tests on: Nvme1n1 00:11:14.448 Test: blockdev write read block ...passed 00:11:14.448 Test: blockdev write zeroes read block ...passed 00:11:14.707 Test: blockdev write zeroes read no split ...passed 00:11:14.707 Test: blockdev write zeroes read split ...passed 00:11:14.707 Test: blockdev write zeroes read split partial ...passed 00:11:14.707 Test: blockdev reset ...[2024-04-23 02:55:53.625776] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:14.707 [2024-04-23 02:55:53.625895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a4680 (9): Bad file descriptor 00:11:14.707 passed 00:11:14.707 Test: blockdev write read 8 blocks ...[2024-04-23 02:55:53.643082] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:14.707 passed 00:11:14.707 Test: blockdev write read size > 128k ...passed 00:11:14.707 Test: blockdev write read invalid size ...passed 00:11:14.707 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:14.707 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:14.707 Test: blockdev write read max offset ...passed 00:11:14.707 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:14.707 Test: blockdev writev readv 8 blocks ...passed 00:11:14.707 Test: blockdev writev readv 30 x 1block ...passed 00:11:14.707 Test: blockdev writev readv block ...passed 00:11:14.707 Test: blockdev writev readv size > 128k ...passed 00:11:14.707 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:14.707 Test: blockdev comparev and writev ...[2024-04-23 02:55:53.653639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.653684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.653713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.653738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:14.707 passed 00:11:14.707 Test: blockdev nvme passthru rw ...[2024-04-23 02:55:53.654234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.654270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.654293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.654306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.654624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.654644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.654664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.654676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.654986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.655006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.655026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:14.707 [2024-04-23 02:55:53.655039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:14.707 passed 00:11:14.707 Test: blockdev nvme passthru vendor specific ...[2024-04-23 02:55:53.656259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:14.707 [2024-04-23 02:55:53.656289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.656986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:14.707 [2024-04-23 02:55:53.657022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:14.707 [2024-04-23 02:55:53.657177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:14.707 [2024-04-23 02:55:53.657198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:14.707 passed 00:11:14.707 Test: blockdev nvme admin passthru ...[2024-04-23 02:55:53.657340] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:14.707 [2024-04-23 02:55:53.657367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:14.707 passed 00:11:14.707 Test: blockdev copy ...passed 00:11:14.707 00:11:14.707 Run Summary: Type Total Ran Passed Failed Inactive 00:11:14.707 suites 1 1 n/a 0 0 00:11:14.707 tests 23 23 23 0 0 00:11:14.707 asserts 152 152 152 0 n/a 00:11:14.707 00:11:14.707 Elapsed time = 0.162 seconds 00:11:14.707 02:55:53 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.707 02:55:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:14.707 02:55:53 -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.707 02:55:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:14.707 02:55:53 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:14.707 02:55:53 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:14.707 02:55:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:14.707 02:55:53 -- nvmf/common.sh@117 -- # sync 00:11:14.707 02:55:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.707 02:55:53 -- nvmf/common.sh@120 -- # set +e 00:11:14.707 02:55:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.707 02:55:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.966 rmmod nvme_tcp 00:11:14.966 rmmod nvme_fabrics 00:11:14.966 rmmod nvme_keyring 00:11:14.966 02:55:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.966 02:55:53 -- nvmf/common.sh@124 -- # set -e 00:11:14.966 02:55:53 -- nvmf/common.sh@125 -- # return 0 00:11:14.966 02:55:53 -- nvmf/common.sh@478 -- # '[' -n 82552 ']' 00:11:14.966 02:55:53 -- nvmf/common.sh@479 -- # killprocess 82552 00:11:14.966 02:55:53 -- common/autotest_common.sh@936 -- # '[' -z 82552 ']' 00:11:14.966 02:55:53 -- common/autotest_common.sh@940 -- # kill -0 82552 00:11:14.966 02:55:53 -- common/autotest_common.sh@941 -- # uname 00:11:14.966 02:55:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:14.966 02:55:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82552 00:11:14.966 killing process with pid 82552 00:11:14.966 02:55:53 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:11:14.966 02:55:53 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:14.966 02:55:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82552' 00:11:14.966 02:55:53 -- common/autotest_common.sh@955 -- # kill 82552 00:11:14.966 02:55:53 -- common/autotest_common.sh@960 -- # wait 82552 00:11:14.966 02:55:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:14.966 02:55:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:14.966 02:55:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:14.966 02:55:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.966 02:55:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.966 02:55:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.966 02:55:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.966 02:55:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.226 02:55:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:15.226 00:11:15.226 real 0m1.758s 00:11:15.226 user 0m5.198s 00:11:15.226 sys 0m0.597s 00:11:15.226 02:55:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:15.226 02:55:54 -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 ************************************ 00:11:15.226 END TEST nvmf_bdevio 00:11:15.226 ************************************ 00:11:15.226 02:55:54 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:11:15.226 02:55:54 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:15.226 02:55:54 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:15.226 02:55:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:15.226 02:55:54 -- common/autotest_common.sh@10 -- # set +x 00:11:15.226 ************************************ 00:11:15.226 START TEST nvmf_bdevio_no_huge 00:11:15.226 ************************************ 
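Both bdevio variants consume the same generated attach configuration: gen_nvmf_target_json renders the bdev_nvme_attach_controller stanza printed above, and bdevio reads it through --json /dev/fd/62. Written to a file instead, a standalone run would look roughly like the following; the params block is copied from the log, while the outer "subsystems" wrapper is the usual SPDK JSON-config layout and is assumed here rather than shown in the output:

  cat > /tmp/nvmf_bdevio.json <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  EOF
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /tmp/nvmf_bdevio.json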
00:11:15.226 02:55:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:15.226 * Looking for test storage... 00:11:15.226 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:15.226 02:55:54 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:15.226 02:55:54 -- nvmf/common.sh@7 -- # uname -s 00:11:15.226 02:55:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.226 02:55:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.226 02:55:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.226 02:55:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.226 02:55:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.226 02:55:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.226 02:55:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.226 02:55:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.226 02:55:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.226 02:55:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.226 02:55:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:11:15.226 02:55:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:11:15.226 02:55:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.226 02:55:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.226 02:55:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:15.226 02:55:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.226 02:55:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:15.226 02:55:54 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.226 02:55:54 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.226 02:55:54 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.226 02:55:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.226 02:55:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.227 02:55:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.227 02:55:54 -- paths/export.sh@5 -- # export PATH 00:11:15.227 02:55:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.227 02:55:54 -- nvmf/common.sh@47 -- # : 0 00:11:15.227 02:55:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.227 02:55:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.227 02:55:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.227 02:55:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.227 02:55:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.227 02:55:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.227 02:55:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.227 02:55:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.227 02:55:54 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:15.227 02:55:54 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:15.227 02:55:54 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:15.227 02:55:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:15.227 02:55:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.227 02:55:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:15.227 02:55:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:15.227 02:55:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:15.227 02:55:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.227 02:55:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:15.227 02:55:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.227 02:55:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:15.227 02:55:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:15.227 02:55:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:15.227 02:55:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:15.227 02:55:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:15.227 02:55:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:15.227 02:55:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.227 02:55:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.227 02:55:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:15.227 02:55:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:15.227 02:55:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:15.227 02:55:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:15.227 02:55:54 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:15.227 02:55:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:15.227 02:55:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:15.227 02:55:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:15.227 02:55:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:15.227 02:55:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:15.227 02:55:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:15.227 02:55:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:15.486 Cannot find device "nvmf_tgt_br" 00:11:15.486 02:55:54 -- nvmf/common.sh@155 -- # true 00:11:15.486 02:55:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:15.486 Cannot find device "nvmf_tgt_br2" 00:11:15.486 02:55:54 -- nvmf/common.sh@156 -- # true 00:11:15.486 02:55:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:15.486 02:55:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:15.486 Cannot find device "nvmf_tgt_br" 00:11:15.486 02:55:54 -- nvmf/common.sh@158 -- # true 00:11:15.486 02:55:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:15.486 Cannot find device "nvmf_tgt_br2" 00:11:15.486 02:55:54 -- nvmf/common.sh@159 -- # true 00:11:15.486 02:55:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:15.486 02:55:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:15.486 02:55:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:15.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.486 02:55:54 -- nvmf/common.sh@162 -- # true 00:11:15.486 02:55:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:15.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:15.486 02:55:54 -- nvmf/common.sh@163 -- # true 00:11:15.486 02:55:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:15.486 02:55:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:15.486 02:55:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:15.486 02:55:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:15.486 02:55:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:15.486 02:55:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:15.486 02:55:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:15.486 02:55:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:15.486 02:55:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:15.486 02:55:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:15.486 02:55:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:15.486 02:55:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:15.486 02:55:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:15.486 02:55:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:15.486 02:55:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:15.486 02:55:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:11:15.486 02:55:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:15.486 02:55:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:15.486 02:55:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:15.486 02:55:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:15.486 02:55:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:15.486 02:55:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:15.486 02:55:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:15.486 02:55:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:15.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:15.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:11:15.486 00:11:15.486 --- 10.0.0.2 ping statistics --- 00:11:15.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.486 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:15.744 02:55:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:15.744 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:15.744 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:11:15.744 00:11:15.744 --- 10.0.0.3 ping statistics --- 00:11:15.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.744 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:11:15.744 02:55:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:15.744 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.744 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:11:15.744 00:11:15.744 --- 10.0.0.1 ping statistics --- 00:11:15.744 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.744 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:11:15.744 02:55:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.744 02:55:54 -- nvmf/common.sh@422 -- # return 0 00:11:15.744 02:55:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:15.744 02:55:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.744 02:55:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:15.744 02:55:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:15.744 02:55:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.744 02:55:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:15.744 02:55:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:15.744 02:55:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:15.744 02:55:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:15.744 02:55:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:15.744 02:55:54 -- common/autotest_common.sh@10 -- # set +x 00:11:15.744 02:55:54 -- nvmf/common.sh@470 -- # nvmfpid=82760 00:11:15.744 02:55:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:15.744 02:55:54 -- nvmf/common.sh@471 -- # waitforlisten 82760 00:11:15.744 02:55:54 -- common/autotest_common.sh@817 -- # '[' -z 82760 ']' 00:11:15.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
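waitforlisten then blocks until the freshly started pid answers on that UNIX socket. The helper lives in autotest_common.sh; approximated by hand, the check is just a poll with a cheap RPC (this sketch assumes rpc_get_methods as the probe; the real helper is more careful, e.g. it gives up if the pid dies):

  sock=/var/tmp/spdk.sock
  for _ in $(seq 1 100); do
      if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
          echo "target is listening on $sock"
          break
      fi
      sleep 0.1
  done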
00:11:15.744 02:55:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.744 02:55:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:15.744 02:55:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.744 02:55:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:15.744 02:55:54 -- common/autotest_common.sh@10 -- # set +x 00:11:15.744 [2024-04-23 02:55:54.721379] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:15.744 [2024-04-23 02:55:54.721466] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:15.744 [2024-04-23 02:55:54.854865] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:15.744 [2024-04-23 02:55:54.857004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:16.002 [2024-04-23 02:55:54.921526] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:16.002 [2024-04-23 02:55:54.921591] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:16.002 [2024-04-23 02:55:54.921617] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:16.002 [2024-04-23 02:55:54.921624] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:16.002 [2024-04-23 02:55:54.921630] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
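The EAL parameter line above is the one substantive difference from the earlier bdevio run: --no-huge -s 1024 on the nvmf_tgt command line becomes "-m 1024 --no-huge --iova-mode=va", i.e. 1024 MB of ordinary anonymous memory instead of hugepages, with VA-mode IOVAs because physical addresses cannot be relied on without hugepages. Side by side, with paths and flags taken from the log:

  # hugepage-backed target (plain nvmf_bdevio run):
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
  # no-hugepage target (this run): same app, 1024 MB of regular pages
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78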
00:11:16.002 [2024-04-23 02:55:54.921800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:16.002 [2024-04-23 02:55:54.921965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:16.002 [2024-04-23 02:55:54.922073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:16.002 [2024-04-23 02:55:54.922080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:16.569 02:55:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:16.569 02:55:55 -- common/autotest_common.sh@850 -- # return 0 00:11:16.569 02:55:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:16.569 02:55:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:16.569 02:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 02:55:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.569 02:55:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:16.569 02:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.569 02:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 [2024-04-23 02:55:55.674710] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:16.569 02:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.569 02:55:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:16.569 02:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.569 02:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 Malloc0 00:11:16.569 02:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.569 02:55:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:16.569 02:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.569 02:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 02:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.569 02:55:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:16.569 02:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.569 02:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 02:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.569 02:55:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:16.569 02:55:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:16.569 02:55:55 -- common/autotest_common.sh@10 -- # set +x 00:11:16.569 [2024-04-23 02:55:55.718878] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:16.569 02:55:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:16.569 02:55:55 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:16.569 02:55:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:16.569 02:55:55 -- nvmf/common.sh@521 -- # config=() 00:11:16.569 02:55:55 -- nvmf/common.sh@521 -- # local subsystem config 00:11:16.569 02:55:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:11:16.569 02:55:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:11:16.569 { 00:11:16.569 "params": { 00:11:16.569 "name": "Nvme$subsystem", 00:11:16.569 "trtype": "$TEST_TRANSPORT", 00:11:16.569 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:16.569 "adrfam": "ipv4", 00:11:16.569 "trsvcid": "$NVMF_PORT", 
00:11:16.569 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:16.569 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:16.569 "hdgst": ${hdgst:-false}, 00:11:16.569 "ddgst": ${ddgst:-false} 00:11:16.569 }, 00:11:16.569 "method": "bdev_nvme_attach_controller" 00:11:16.569 } 00:11:16.569 EOF 00:11:16.569 )") 00:11:16.839 02:55:55 -- nvmf/common.sh@543 -- # cat 00:11:16.839 02:55:55 -- nvmf/common.sh@545 -- # jq . 00:11:16.839 02:55:55 -- nvmf/common.sh@546 -- # IFS=, 00:11:16.839 02:55:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:11:16.839 "params": { 00:11:16.839 "name": "Nvme1", 00:11:16.839 "trtype": "tcp", 00:11:16.839 "traddr": "10.0.0.2", 00:11:16.839 "adrfam": "ipv4", 00:11:16.839 "trsvcid": "4420", 00:11:16.839 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:16.839 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:16.839 "hdgst": false, 00:11:16.839 "ddgst": false 00:11:16.839 }, 00:11:16.839 "method": "bdev_nvme_attach_controller" 00:11:16.839 }' 00:11:16.839 [2024-04-23 02:55:55.775336] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:16.839 [2024-04-23 02:55:55.775444] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82796 ] 00:11:16.839 [2024-04-23 02:55:55.912232] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:16.839 [2024-04-23 02:55:55.915534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:17.111 [2024-04-23 02:55:56.019115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.111 [2024-04-23 02:55:56.019215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.111 [2024-04-23 02:55:56.019222] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.111 I/O targets: 00:11:17.111 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:17.111 00:11:17.111 00:11:17.111 CUnit - A unit testing framework for C - Version 2.1-3 00:11:17.111 http://cunit.sourceforge.net/ 00:11:17.111 00:11:17.111 00:11:17.111 Suite: bdevio tests on: Nvme1n1 00:11:17.111 Test: blockdev write read block ...passed 00:11:17.111 Test: blockdev write zeroes read block ...passed 00:11:17.111 Test: blockdev write zeroes read no split ...passed 00:11:17.111 Test: blockdev write zeroes read split ...passed 00:11:17.111 Test: blockdev write zeroes read split partial ...passed 00:11:17.111 Test: blockdev reset ...[2024-04-23 02:55:56.228210] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:17.111 [2024-04-23 02:55:56.228330] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fc6f30 (9): Bad file descriptor 00:11:17.111 [2024-04-23 02:55:56.245943] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:17.111 passed 00:11:17.111 Test: blockdev write read 8 blocks ...passed 00:11:17.111 Test: blockdev write read size > 128k ...passed 00:11:17.111 Test: blockdev write read invalid size ...passed 00:11:17.111 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:17.111 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:17.111 Test: blockdev write read max offset ...passed 00:11:17.111 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:17.111 Test: blockdev writev readv 8 blocks ...passed 00:11:17.111 Test: blockdev writev readv 30 x 1block ...passed 00:11:17.111 Test: blockdev writev readv block ...passed 00:11:17.111 Test: blockdev writev readv size > 128k ...passed 00:11:17.111 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:17.111 Test: blockdev comparev and writev ...[2024-04-23 02:55:56.255864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.255911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.255933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.255945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.256440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.256480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.256509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.256522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.256948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.256983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.257003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.257015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.257358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.257390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.257409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:17.111 [2024-04-23 02:55:56.257421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:17.111 passed 00:11:17.111 Test: blockdev nvme passthru rw ...passed 00:11:17.111 Test: blockdev nvme passthru vendor specific ...[2024-04-23 02:55:56.258743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.111 [2024-04-23 02:55:56.258778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:17.111 [2024-04-23 02:55:56.259208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.111 [2024-04-23 02:55:56.259242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:17.112 [2024-04-23 02:55:56.259469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.112 [2024-04-23 02:55:56.259596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:17.112 [2024-04-23 02:55:56.259832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:17.112 [2024-04-23 02:55:56.259924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:17.112 passed 00:11:17.370 Test: blockdev nvme admin passthru ...passed 00:11:17.370 Test: blockdev copy ...passed 00:11:17.370 00:11:17.370 Run Summary: Type Total Ran Passed Failed Inactive 00:11:17.370 suites 1 1 n/a 0 0 00:11:17.370 tests 23 23 23 0 0 00:11:17.370 asserts 152 152 152 0 n/a 00:11:17.370 00:11:17.370 Elapsed time = 0.180 seconds 00:11:17.628 02:55:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:17.628 02:55:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:17.628 02:55:56 -- common/autotest_common.sh@10 -- # set +x 00:11:17.628 02:55:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:17.628 02:55:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:17.628 02:55:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:17.628 02:55:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:17.628 02:55:56 -- nvmf/common.sh@117 -- # sync 00:11:17.628 02:55:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:17.628 02:55:56 -- nvmf/common.sh@120 -- # set +e 00:11:17.628 02:55:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:17.628 02:55:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:17.628 rmmod nvme_tcp 00:11:17.628 rmmod nvme_fabrics 00:11:17.628 rmmod nvme_keyring 00:11:17.628 02:55:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:17.628 02:55:56 -- nvmf/common.sh@124 -- # set -e 00:11:17.628 02:55:56 -- nvmf/common.sh@125 -- # return 0 00:11:17.628 02:55:56 -- nvmf/common.sh@478 -- # '[' -n 82760 ']' 00:11:17.628 02:55:56 -- nvmf/common.sh@479 -- # killprocess 82760 00:11:17.628 02:55:56 -- common/autotest_common.sh@936 -- # '[' -z 82760 ']' 00:11:17.628 02:55:56 -- common/autotest_common.sh@940 -- # kill -0 82760 00:11:17.628 02:55:56 -- common/autotest_common.sh@941 -- # uname 00:11:17.628 02:55:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:17.628 02:55:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82760 00:11:17.628 02:55:56 -- common/autotest_common.sh@942 -- 
# process_name=reactor_3 00:11:17.628 02:55:56 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:11:17.628 killing process with pid 82760 00:11:17.628 02:55:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82760' 00:11:17.628 02:55:56 -- common/autotest_common.sh@955 -- # kill 82760 00:11:17.628 02:55:56 -- common/autotest_common.sh@960 -- # wait 82760 00:11:18.196 02:55:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:18.196 02:55:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:18.196 02:55:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:18.196 02:55:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:18.196 02:55:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:18.196 02:55:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.196 02:55:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.196 02:55:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.196 02:55:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:11:18.196 ************************************ 00:11:18.196 END TEST nvmf_bdevio_no_huge 00:11:18.196 ************************************ 00:11:18.196 00:11:18.196 real 0m2.839s 00:11:18.196 user 0m9.496s 00:11:18.196 sys 0m1.090s 00:11:18.196 02:55:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:18.196 02:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.196 02:55:57 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:18.196 02:55:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:18.196 02:55:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:18.196 02:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.196 ************************************ 00:11:18.196 START TEST nvmf_tls 00:11:18.196 ************************************ 00:11:18.196 02:55:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:18.196 * Looking for test storage... 
00:11:18.196 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:18.196 02:55:57 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:18.196 02:55:57 -- nvmf/common.sh@7 -- # uname -s 00:11:18.196 02:55:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.196 02:55:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.196 02:55:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.196 02:55:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.196 02:55:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.196 02:55:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.196 02:55:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.196 02:55:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.196 02:55:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.196 02:55:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.196 02:55:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:11:18.196 02:55:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:11:18.196 02:55:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.196 02:55:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.196 02:55:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:18.196 02:55:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.196 02:55:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:18.196 02:55:57 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.196 02:55:57 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.196 02:55:57 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:18.196 02:55:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.197 02:55:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.197 02:55:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.197 02:55:57 -- paths/export.sh@5 -- # export PATH 00:11:18.197 02:55:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.197 02:55:57 -- nvmf/common.sh@47 -- # : 0 00:11:18.197 02:55:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.197 02:55:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.197 02:55:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.197 02:55:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.197 02:55:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.197 02:55:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:18.197 02:55:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.197 02:55:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.197 02:55:57 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:18.197 02:55:57 -- target/tls.sh@62 -- # nvmftestinit 00:11:18.197 02:55:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:18.197 02:55:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.197 02:55:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:18.197 02:55:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:18.197 02:55:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:18.197 02:55:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.197 02:55:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:18.197 02:55:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.197 02:55:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:11:18.197 02:55:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:11:18.197 02:55:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:11:18.197 02:55:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:11:18.197 02:55:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:11:18.197 02:55:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:11:18.197 02:55:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.197 02:55:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.197 02:55:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:18.197 02:55:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:11:18.197 02:55:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:18.197 02:55:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:18.197 02:55:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:18.197 
02:55:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.197 02:55:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:18.197 02:55:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:18.197 02:55:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:18.197 02:55:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:18.197 02:55:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:11:18.197 02:55:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:11:18.197 Cannot find device "nvmf_tgt_br" 00:11:18.197 02:55:57 -- nvmf/common.sh@155 -- # true 00:11:18.455 02:55:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:11:18.455 Cannot find device "nvmf_tgt_br2" 00:11:18.455 02:55:57 -- nvmf/common.sh@156 -- # true 00:11:18.455 02:55:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:11:18.455 02:55:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:11:18.455 Cannot find device "nvmf_tgt_br" 00:11:18.455 02:55:57 -- nvmf/common.sh@158 -- # true 00:11:18.455 02:55:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:11:18.455 Cannot find device "nvmf_tgt_br2" 00:11:18.455 02:55:57 -- nvmf/common.sh@159 -- # true 00:11:18.455 02:55:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:11:18.455 02:55:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:11:18.455 02:55:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:18.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.455 02:55:57 -- nvmf/common.sh@162 -- # true 00:11:18.455 02:55:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:18.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:18.455 02:55:57 -- nvmf/common.sh@163 -- # true 00:11:18.455 02:55:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:11:18.455 02:55:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:18.455 02:55:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:18.455 02:55:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:18.455 02:55:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:18.455 02:55:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:18.456 02:55:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:18.456 02:55:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:18.456 02:55:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:18.456 02:55:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:11:18.456 02:55:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:11:18.456 02:55:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:11:18.456 02:55:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:11:18.456 02:55:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:18.456 02:55:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:18.456 02:55:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:18.456 02:55:57 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:11:18.456 02:55:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:11:18.456 02:55:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:11:18.456 02:55:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:18.456 02:55:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:18.714 02:55:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:18.714 02:55:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:18.714 02:55:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:11:18.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:11:18.714 00:11:18.714 --- 10.0.0.2 ping statistics --- 00:11:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.714 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:18.714 02:55:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:11:18.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:18.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:11:18.714 00:11:18.714 --- 10.0.0.3 ping statistics --- 00:11:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.714 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:11:18.714 02:55:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:18.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:11:18.714 00:11:18.714 --- 10.0.0.1 ping statistics --- 00:11:18.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.714 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:11:18.714 02:55:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.714 02:55:57 -- nvmf/common.sh@422 -- # return 0 00:11:18.714 02:55:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:18.714 02:55:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.714 02:55:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:18.714 02:55:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:18.714 02:55:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.714 02:55:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:18.714 02:55:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:18.714 02:55:57 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:18.714 02:55:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:18.714 02:55:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:18.714 02:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.714 02:55:57 -- nvmf/common.sh@470 -- # nvmfpid=82981 00:11:18.714 02:55:57 -- nvmf/common.sh@471 -- # waitforlisten 82981 00:11:18.714 02:55:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:18.714 02:55:57 -- common/autotest_common.sh@817 -- # '[' -z 82981 ']' 00:11:18.714 02:55:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.714 02:55:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:18.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
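The namespace plumbing above is worth seeing in one place: the initiator side keeps nvmf_init_if at 10.0.0.1, the nvmf_tgt_ns_spdk namespace gets 10.0.0.2 and 10.0.0.3, and a bridge joins the veth peer ends. A condensed sketch of the same sequence (run as root; names and addresses copied from this run):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3           # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # namespace -> host

The target itself then runs inside the namespace (the ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt invocation above), which is why 10.0.0.2:4420 is reachable from the host side only across this bridge.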
00:11:18.714 02:55:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.714 02:55:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:18.714 02:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.714 [2024-04-23 02:55:57.709251] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:18.714 [2024-04-23 02:55:57.709369] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.715 [2024-04-23 02:55:57.829028] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:18.715 [2024-04-23 02:55:57.846772] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:18.973 [2024-04-23 02:55:57.894553] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.973 [2024-04-23 02:55:57.894645] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.973 [2024-04-23 02:55:57.894672] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.973 [2024-04-23 02:55:57.894690] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.973 [2024-04-23 02:55:57.894708] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:18.973 [2024-04-23 02:55:57.894755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.973 02:55:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:18.973 02:55:57 -- common/autotest_common.sh@850 -- # return 0 00:11:18.973 02:55:57 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:18.973 02:55:57 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:18.973 02:55:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.973 02:55:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.973 02:55:58 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:11:18.973 02:55:58 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:19.232 true 00:11:19.232 02:55:58 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:19.232 02:55:58 -- target/tls.sh@73 -- # jq -r .tls_version 00:11:19.491 02:55:58 -- target/tls.sh@73 -- # version=0 00:11:19.491 02:55:58 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:11:19.491 02:55:58 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:19.750 02:55:58 -- target/tls.sh@81 -- # jq -r .tls_version 00:11:19.750 02:55:58 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:20.008 02:55:59 -- target/tls.sh@81 -- # version=13 00:11:20.008 02:55:59 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:11:20.008 02:55:59 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:20.297 02:55:59 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:20.297 02:55:59 -- target/tls.sh@89 -- # jq -r .tls_version 00:11:20.555 02:55:59 -- 
target/tls.sh@89 -- # version=7 00:11:20.555 02:55:59 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:11:20.555 02:55:59 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:20.555 02:55:59 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:11:20.812 02:55:59 -- target/tls.sh@96 -- # ktls=false 00:11:20.812 02:55:59 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:11:20.812 02:55:59 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:21.070 02:56:00 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:21.070 02:56:00 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:11:21.327 02:56:00 -- target/tls.sh@104 -- # ktls=true 00:11:21.327 02:56:00 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:11:21.327 02:56:00 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:21.585 02:56:00 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:11:21.585 02:56:00 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:21.843 02:56:00 -- target/tls.sh@112 -- # ktls=false 00:11:21.844 02:56:00 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:11:21.844 02:56:00 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:11:21.844 02:56:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:11:21.844 02:56:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:11:21.844 02:56:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:11:21.844 02:56:00 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:11:21.844 02:56:00 -- nvmf/common.sh@693 -- # digest=1 00:11:21.844 02:56:00 -- nvmf/common.sh@694 -- # python - 00:11:21.844 02:56:00 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:21.844 02:56:00 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:11:21.844 02:56:00 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:11:21.844 02:56:00 -- nvmf/common.sh@691 -- # local prefix key digest 00:11:21.844 02:56:00 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:11:21.844 02:56:00 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:11:21.844 02:56:00 -- nvmf/common.sh@693 -- # digest=1 00:11:21.844 02:56:00 -- nvmf/common.sh@694 -- # python - 00:11:21.844 02:56:00 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:21.844 02:56:00 -- target/tls.sh@121 -- # mktemp 00:11:21.844 02:56:00 -- target/tls.sh@121 -- # key_path=/tmp/tmp.YPMR9AQ6U7 00:11:21.844 02:56:00 -- target/tls.sh@122 -- # mktemp 00:11:21.844 02:56:00 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.AceOazIRZz 00:11:21.844 02:56:00 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:21.844 02:56:00 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:21.844 02:56:00 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.YPMR9AQ6U7 00:11:21.844 02:56:00 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.AceOazIRZz 00:11:21.844 02:56:00 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:22.102 02:56:01 -- target/tls.sh@131 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:22.361 02:56:01 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.YPMR9AQ6U7 00:11:22.361 02:56:01 -- target/tls.sh@49 -- # local key=/tmp/tmp.YPMR9AQ6U7 00:11:22.361 02:56:01 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:22.619 [2024-04-23 02:56:01.753163] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:22.619 02:56:01 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:22.879 02:56:01 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:23.138 [2024-04-23 02:56:02.197220] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:23.138 [2024-04-23 02:56:02.197520] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:23.138 02:56:02 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:23.396 malloc0 00:11:23.396 02:56:02 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:23.654 02:56:02 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YPMR9AQ6U7 00:11:23.912 [2024-04-23 02:56:02.875489] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:11:23.912 02:56:02 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.YPMR9AQ6U7 00:11:33.940 Initializing NVMe Controllers 00:11:33.940 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:33.940 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:33.940 Initialization complete. Launching workers. 
00:11:33.940 ======================================================== 00:11:33.940 Latency(us) 00:11:33.940 Device Information : IOPS MiB/s Average min max 00:11:33.940 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9913.69 38.73 6457.02 1366.63 9077.70 00:11:33.940 ======================================================== 00:11:33.940 Total : 9913.69 38.73 6457.02 1366.63 9077.70 00:11:33.940 00:11:33.940 02:56:13 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.YPMR9AQ6U7 00:11:33.940 02:56:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:33.940 02:56:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:33.940 02:56:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:33.940 02:56:13 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YPMR9AQ6U7' 00:11:33.940 02:56:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:33.940 02:56:13 -- target/tls.sh@28 -- # bdevperf_pid=83204 00:11:33.940 02:56:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:33.940 02:56:13 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:33.940 02:56:13 -- target/tls.sh@31 -- # waitforlisten 83204 /var/tmp/bdevperf.sock 00:11:33.940 02:56:13 -- common/autotest_common.sh@817 -- # '[' -z 83204 ']' 00:11:33.940 02:56:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:33.940 02:56:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:33.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:33.940 02:56:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:33.940 02:56:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:33.940 02:56:13 -- common/autotest_common.sh@10 -- # set +x 00:11:34.199 [2024-04-23 02:56:13.136295] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:34.199 [2024-04-23 02:56:13.136390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83204 ] 00:11:34.199 [2024-04-23 02:56:13.258045] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
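The two NVMeTLSkey-1 strings generated earlier (written to /tmp/tmp.YPMR9AQ6U7 and /tmp/tmp.AceOazIRZz) come from format_interchange_psk, which nvmf/common.sh implements as a small inline python helper (format_key). Below is a self-contained sketch of the surrounding TLS setup plus a reconstruction of that helper; the CRC byte order is inferred from the printed keys, so treat it as an assumption rather than the exact source:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Make "ssl" the default sock implementation and pin TLS 1.3, as above
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13

format_interchange_psk() {
    # <key string> <hash id>; hash id 1 prints as the "01" field
    local key=$1 digest=$2
    python3 - "$key" "$digest" << 'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()
digest = int(sys.argv[2])
# Assumed layout: base64 over the raw key with its CRC32 (little-endian) appended
crc = zlib.crc32(key).to_bytes(4, byteorder="little")
print("NVMeTLSkey-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

key=$(format_interchange_psk 00112233445566778899aabbccddeeff 1)
key_path=$(mktemp)            # /tmp/tmp.YPMR9AQ6U7 in this run
echo -n "$key" > "$key_path"  # stored without a trailing newline, as in the log
chmod 0600 "$key_path"

If the byte-order assumption holds, the function reproduces the first key of this run (NVMeTLSkey-1:01:MDAx...JEiQ:). The same file is then handed to both sides: nvmf_subsystem_add_host --psk on the target, and --psk or --psk-path on the initiator tools.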
00:11:34.199 [2024-04-23 02:56:13.279504] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.199 [2024-04-23 02:56:13.320087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.458 02:56:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:34.458 02:56:13 -- common/autotest_common.sh@850 -- # return 0 00:11:34.458 02:56:13 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YPMR9AQ6U7 00:11:34.458 [2024-04-23 02:56:13.579421] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:34.458 [2024-04-23 02:56:13.579545] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:34.716 TLSTESTn1 00:11:34.716 02:56:13 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:34.716 Running I/O for 10 seconds... 00:11:44.689 00:11:44.689 Latency(us) 00:11:44.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.689 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:44.689 Verification LBA range: start 0x0 length 0x2000 00:11:44.689 TLSTESTn1 : 10.03 4110.63 16.06 0.00 0.00 31071.06 7685.59 21805.61 00:11:44.689 =================================================================================================================== 00:11:44.689 Total : 4110.63 16.06 0.00 0.00 31071.06 7685.59 21805.61 00:11:44.689 0 00:11:44.689 02:56:23 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:44.689 02:56:23 -- target/tls.sh@45 -- # killprocess 83204 00:11:44.689 02:56:23 -- common/autotest_common.sh@936 -- # '[' -z 83204 ']' 00:11:44.689 02:56:23 -- common/autotest_common.sh@940 -- # kill -0 83204 00:11:44.689 02:56:23 -- common/autotest_common.sh@941 -- # uname 00:11:44.689 02:56:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:44.689 02:56:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83204 00:11:44.948 killing process with pid 83204 00:11:44.948 Received shutdown signal, test time was about 10.000000 seconds 00:11:44.948 00:11:44.948 Latency(us) 00:11:44.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.948 =================================================================================================================== 00:11:44.948 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:44.948 02:56:23 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:44.948 02:56:23 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:44.948 02:56:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83204' 00:11:44.948 02:56:23 -- common/autotest_common.sh@955 -- # kill 83204 00:11:44.948 [2024-04-23 02:56:23.864524] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:44.948 02:56:23 -- common/autotest_common.sh@960 -- # wait 83204 00:11:44.948 02:56:24 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AceOazIRZz 00:11:44.948 02:56:24 -- common/autotest_common.sh@638 -- # local es=0 00:11:44.948 02:56:24 
-- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AceOazIRZz 00:11:44.948 02:56:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:11:44.948 02:56:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.948 02:56:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:11:44.948 02:56:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:44.948 02:56:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.AceOazIRZz 00:11:44.948 02:56:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:44.948 02:56:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:44.948 02:56:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:44.948 02:56:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.AceOazIRZz' 00:11:44.948 02:56:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:44.948 02:56:24 -- target/tls.sh@28 -- # bdevperf_pid=83330 00:11:44.948 02:56:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:44.948 02:56:24 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:44.948 02:56:24 -- target/tls.sh@31 -- # waitforlisten 83330 /var/tmp/bdevperf.sock 00:11:44.948 02:56:24 -- common/autotest_common.sh@817 -- # '[' -z 83330 ']' 00:11:44.948 02:56:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.948 02:56:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:44.948 02:56:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.948 02:56:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:44.948 02:56:24 -- common/autotest_common.sh@10 -- # set +x 00:11:44.948 [2024-04-23 02:56:24.078186] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:44.948 [2024-04-23 02:56:24.079080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83330 ] 00:11:45.206 [2024-04-23 02:56:24.205682] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
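Each run_bdevperf case follows the same shape: start bdevperf with its own private RPC socket, then drive bdev_nvme_attach_controller over it. For this case, the attach (copied from the xtrace that follows) deliberately presents the second key, which the target was never configured with, so the TLS handshake has to fail:

# Attach from bdevperf's RPC socket; /tmp/tmp.AceOazIRZz is the wrong key here,
# since the target only knows /tmp/tmp.YPMR9AQ6U7 for host1/cnode1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.AceOazIRZz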
00:11:45.206 [2024-04-23 02:56:24.217172] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.206 [2024-04-23 02:56:24.251380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.141 02:56:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:46.141 02:56:25 -- common/autotest_common.sh@850 -- # return 0 00:11:46.141 02:56:25 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.AceOazIRZz 00:11:46.141 [2024-04-23 02:56:25.211758] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:46.141 [2024-04-23 02:56:25.211879] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:46.141 [2024-04-23 02:56:25.223555] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:46.141 [2024-04-23 02:56:25.223852] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2037ae0 (107): Transport endpoint is not connected 00:11:46.141 [2024-04-23 02:56:25.224844] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2037ae0 (9): Bad file descriptor 00:11:46.141 [2024-04-23 02:56:25.225840] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:46.141 [2024-04-23 02:56:25.225874] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:46.141 [2024-04-23 02:56:25.225907] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:46.141 request: 00:11:46.141 { 00:11:46.141 "name": "TLSTEST", 00:11:46.141 "trtype": "tcp", 00:11:46.141 "traddr": "10.0.0.2", 00:11:46.141 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:46.141 "adrfam": "ipv4", 00:11:46.141 "trsvcid": "4420", 00:11:46.141 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:46.141 "psk": "/tmp/tmp.AceOazIRZz", 00:11:46.141 "method": "bdev_nvme_attach_controller", 00:11:46.141 "req_id": 1 00:11:46.141 } 00:11:46.141 Got JSON-RPC error response 00:11:46.141 response: 00:11:46.141 { 00:11:46.141 "code": -32602, 00:11:46.141 "message": "Invalid parameters" 00:11:46.141 } 00:11:46.141 02:56:25 -- target/tls.sh@36 -- # killprocess 83330 00:11:46.141 02:56:25 -- common/autotest_common.sh@936 -- # '[' -z 83330 ']' 00:11:46.141 02:56:25 -- common/autotest_common.sh@940 -- # kill -0 83330 00:11:46.142 02:56:25 -- common/autotest_common.sh@941 -- # uname 00:11:46.142 02:56:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:46.142 02:56:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83330 00:11:46.142 killing process with pid 83330 00:11:46.142 Received shutdown signal, test time was about 10.000000 seconds 00:11:46.142 00:11:46.142 Latency(us) 00:11:46.142 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:46.142 =================================================================================================================== 00:11:46.142 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:46.142 02:56:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:46.142 02:56:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:46.142 02:56:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83330' 00:11:46.142 02:56:25 -- common/autotest_common.sh@955 -- # kill 83330 00:11:46.142 [2024-04-23 02:56:25.269927] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:46.142 02:56:25 -- common/autotest_common.sh@960 -- # wait 83330 00:11:46.401 02:56:25 -- target/tls.sh@37 -- # return 1 00:11:46.401 02:56:25 -- common/autotest_common.sh@641 -- # es=1 00:11:46.401 02:56:25 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:46.401 02:56:25 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:46.401 02:56:25 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:46.401 02:56:25 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YPMR9AQ6U7 00:11:46.401 02:56:25 -- common/autotest_common.sh@638 -- # local es=0 00:11:46.401 02:56:25 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YPMR9AQ6U7 00:11:46.401 02:56:25 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:11:46.401 02:56:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.401 02:56:25 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:11:46.401 02:56:25 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:46.401 02:56:25 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.YPMR9AQ6U7 00:11:46.401 02:56:25 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:46.401 02:56:25 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:46.401 02:56:25 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:46.401 
02:56:25 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YPMR9AQ6U7' 00:11:46.401 02:56:25 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:46.401 02:56:25 -- target/tls.sh@28 -- # bdevperf_pid=83352 00:11:46.401 02:56:25 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:46.401 02:56:25 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:46.401 02:56:25 -- target/tls.sh@31 -- # waitforlisten 83352 /var/tmp/bdevperf.sock 00:11:46.401 02:56:25 -- common/autotest_common.sh@817 -- # '[' -z 83352 ']' 00:11:46.401 02:56:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:46.401 02:56:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:46.401 02:56:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:46.401 02:56:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:46.401 02:56:25 -- common/autotest_common.sh@10 -- # set +x 00:11:46.401 [2024-04-23 02:56:25.461528] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:46.401 [2024-04-23 02:56:25.461825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83352 ] 00:11:46.660 [2024-04-23 02:56:25.585693] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
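The failure this case is after is identity-based rather than cryptographic: the target looks PSKs up by the identity string NVMe0R01 <hostnqn> <subnqn> (visible verbatim in the errors below), and the only registered pairing is host1 with cnode1, so presenting host2 finds no key at all. Supporting a second host would take its own registration; a hedged sketch, where the /tmp/host2.psk path is hypothetical:

# Hypothetical: give host2 its own PSK on cnode1 (not done in this run;
# the file name host2.psk is made up for illustration)
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/host2.psk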
00:11:46.660 [2024-04-23 02:56:25.598740] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:46.660 [2024-04-23 02:56:25.634452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.227 02:56:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:47.227 02:56:26 -- common/autotest_common.sh@850 -- # return 0 00:11:47.227 02:56:26 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.YPMR9AQ6U7 00:11:47.486 [2024-04-23 02:56:26.567898] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:47.486 [2024-04-23 02:56:26.568030] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:47.486 [2024-04-23 02:56:26.573614] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:47.486 [2024-04-23 02:56:26.573659] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:47.486 [2024-04-23 02:56:26.573714] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:47.486 [2024-04-23 02:56:26.573824] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd23ae0 (107): Transport endpoint is not connected 00:11:47.486 [2024-04-23 02:56:26.574810] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd23ae0 (9): Bad file descriptor 00:11:47.486 [2024-04-23 02:56:26.575808] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:47.486 [2024-04-23 02:56:26.575840] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:47.486 [2024-04-23 02:56:26.575872] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:47.486 request: 00:11:47.486 { 00:11:47.486 "name": "TLSTEST", 00:11:47.486 "trtype": "tcp", 00:11:47.486 "traddr": "10.0.0.2", 00:11:47.486 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:47.486 "adrfam": "ipv4", 00:11:47.486 "trsvcid": "4420", 00:11:47.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.486 "psk": "/tmp/tmp.YPMR9AQ6U7", 00:11:47.486 "method": "bdev_nvme_attach_controller", 00:11:47.486 "req_id": 1 00:11:47.486 } 00:11:47.486 Got JSON-RPC error response 00:11:47.486 response: 00:11:47.486 { 00:11:47.486 "code": -32602, 00:11:47.486 "message": "Invalid parameters" 00:11:47.486 } 00:11:47.486 02:56:26 -- target/tls.sh@36 -- # killprocess 83352 00:11:47.486 02:56:26 -- common/autotest_common.sh@936 -- # '[' -z 83352 ']' 00:11:47.486 02:56:26 -- common/autotest_common.sh@940 -- # kill -0 83352 00:11:47.486 02:56:26 -- common/autotest_common.sh@941 -- # uname 00:11:47.486 02:56:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:47.486 02:56:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83352 00:11:47.486 killing process with pid 83352 00:11:47.486 Received shutdown signal, test time was about 10.000000 seconds 00:11:47.486 00:11:47.486 Latency(us) 00:11:47.486 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.486 =================================================================================================================== 00:11:47.486 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:47.486 02:56:26 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:47.486 02:56:26 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:47.486 02:56:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83352' 00:11:47.486 02:56:26 -- common/autotest_common.sh@955 -- # kill 83352 00:11:47.486 [2024-04-23 02:56:26.632182] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:47.486 02:56:26 -- common/autotest_common.sh@960 -- # wait 83352 00:11:47.770 02:56:26 -- target/tls.sh@37 -- # return 1 00:11:47.770 02:56:26 -- common/autotest_common.sh@641 -- # es=1 00:11:47.770 02:56:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:47.770 02:56:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:47.770 02:56:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:47.770 02:56:26 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YPMR9AQ6U7 00:11:47.770 02:56:26 -- common/autotest_common.sh@638 -- # local es=0 00:11:47.770 02:56:26 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YPMR9AQ6U7 00:11:47.770 02:56:26 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:11:47.770 02:56:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:47.770 02:56:26 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:11:47.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:47.770 02:56:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:47.770 02:56:26 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.YPMR9AQ6U7 00:11:47.770 02:56:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:47.770 02:56:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:47.770 02:56:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:47.770 02:56:26 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.YPMR9AQ6U7' 00:11:47.770 02:56:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:47.770 02:56:26 -- target/tls.sh@28 -- # bdevperf_pid=83380 00:11:47.770 02:56:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:47.770 02:56:26 -- target/tls.sh@31 -- # waitforlisten 83380 /var/tmp/bdevperf.sock 00:11:47.770 02:56:26 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:47.770 02:56:26 -- common/autotest_common.sh@817 -- # '[' -z 83380 ']' 00:11:47.770 02:56:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:47.770 02:56:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:47.770 02:56:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:47.770 02:56:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:47.770 02:56:26 -- common/autotest_common.sh@10 -- # set +x 00:11:47.770 [2024-04-23 02:56:26.829714] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:47.770 [2024-04-23 02:56:26.830054] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83380 ] 00:11:48.029 [2024-04-23 02:56:26.951739] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:11:48.029 [2024-04-23 02:56:26.969678] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.029 [2024-04-23 02:56:27.005513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.964 02:56:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:48.964 02:56:27 -- common/autotest_common.sh@850 -- # return 0 00:11:48.964 02:56:27 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.YPMR9AQ6U7 00:11:48.964 [2024-04-23 02:56:28.006273] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:48.964 [2024-04-23 02:56:28.007086] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:48.964 [2024-04-23 02:56:28.013874] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:48.964 [2024-04-23 02:56:28.014121] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:48.964 [2024-04-23 02:56:28.014377] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:48.964 [2024-04-23 02:56:28.014724] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3fae0 (107): Transport endpoint is not connected 00:11:48.964 [2024-04-23 02:56:28.015706] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3fae0 (9): Bad file descriptor 00:11:48.964 [2024-04-23 02:56:28.016701] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:48.964 [2024-04-23 02:56:28.016731] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:48.964 [2024-04-23 02:56:28.016764] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:11:48.964 request: 00:11:48.964 { 00:11:48.964 "name": "TLSTEST", 00:11:48.964 "trtype": "tcp", 00:11:48.964 "traddr": "10.0.0.2", 00:11:48.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:48.964 "adrfam": "ipv4", 00:11:48.964 "trsvcid": "4420", 00:11:48.964 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:48.964 "psk": "/tmp/tmp.YPMR9AQ6U7", 00:11:48.964 "method": "bdev_nvme_attach_controller", 00:11:48.964 "req_id": 1 00:11:48.964 } 00:11:48.964 Got JSON-RPC error response 00:11:48.964 response: 00:11:48.964 { 00:11:48.964 "code": -32602, 00:11:48.964 "message": "Invalid parameters" 00:11:48.964 } 00:11:48.964 02:56:28 -- target/tls.sh@36 -- # killprocess 83380 00:11:48.964 02:56:28 -- common/autotest_common.sh@936 -- # '[' -z 83380 ']' 00:11:48.964 02:56:28 -- common/autotest_common.sh@940 -- # kill -0 83380 00:11:48.964 02:56:28 -- common/autotest_common.sh@941 -- # uname 00:11:48.964 02:56:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:48.964 02:56:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83380 00:11:48.964 killing process with pid 83380 00:11:48.964 Received shutdown signal, test time was about 10.000000 seconds 00:11:48.964 00:11:48.964 Latency(us) 00:11:48.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.964 =================================================================================================================== 00:11:48.964 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:48.964 02:56:28 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:48.964 02:56:28 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:48.964 02:56:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83380' 00:11:48.964 02:56:28 -- common/autotest_common.sh@955 -- # kill 83380 00:11:48.964 [2024-04-23 02:56:28.062156] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:11:48.964 02:56:28 -- common/autotest_common.sh@960 -- # wait 83380 00:11:49.223 02:56:28 -- target/tls.sh@37 -- # return 1 00:11:49.223 02:56:28 -- common/autotest_common.sh@641 -- # es=1 00:11:49.223 02:56:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:49.223 02:56:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:49.223 02:56:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:49.223 02:56:28 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:49.223 02:56:28 -- common/autotest_common.sh@638 -- # local es=0 00:11:49.223 02:56:28 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:49.223 02:56:28 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:11:49.223 02:56:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:49.223 02:56:28 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:11:49.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
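The teardown that follows every bdevperf case is the killprocess helper: assert the pid still exists, record its comm name, SIGTERM it, and reap it. A simplified reconstruction of the behavior visible in the trace, not the verbatim autotest_common.sh function:

killprocess() {
    local pid=$1
    kill -0 "$pid"                             # error out if the pid is already gone
    local pname
    pname=$(ps --no-headers -o comm= "$pid")   # reactor_2 for a bdevperf core thread
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                        # reap; a non-zero exit is tolerated here
}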
00:11:49.223 02:56:28 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:11:49.223 02:56:28 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:49.223 02:56:28 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:49.223 02:56:28 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:49.223 02:56:28 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:49.223 02:56:28 -- target/tls.sh@23 -- # psk= 00:11:49.223 02:56:28 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:49.223 02:56:28 -- target/tls.sh@28 -- # bdevperf_pid=83402 00:11:49.223 02:56:28 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:49.223 02:56:28 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:49.223 02:56:28 -- target/tls.sh@31 -- # waitforlisten 83402 /var/tmp/bdevperf.sock 00:11:49.223 02:56:28 -- common/autotest_common.sh@817 -- # '[' -z 83402 ']' 00:11:49.223 02:56:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:49.223 02:56:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:49.223 02:56:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:49.223 02:56:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:49.223 02:56:28 -- common/autotest_common.sh@10 -- # set +x 00:11:49.223 [2024-04-23 02:56:28.248563] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:49.223 [2024-04-23 02:56:28.249023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83402 ] 00:11:49.223 [2024-04-23 02:56:28.368404] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:49.481 [2024-04-23 02:56:28.383670] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.481 [2024-04-23 02:56:28.421466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.049 02:56:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:50.049 02:56:29 -- common/autotest_common.sh@850 -- # return 0 00:11:50.049 02:56:29 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:50.307 [2024-04-23 02:56:29.365542] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:50.307 [2024-04-23 02:56:29.367478] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1669750 (9): Bad file descriptor 00:11:50.307 [2024-04-23 02:56:29.368475] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:50.307 [2024-04-23 02:56:29.368894] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:50.307 [2024-04-23 02:56:29.369152] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
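The second negative case drops the PSK entirely: against the TLS-enabled listener set up earlier in tls.sh, the handshake cannot complete without a key, the socket surfaces errno 107 ("Transport endpoint is not connected"), and initialization fails the same way. The attach call, verbatim minus the --psk argument:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
    bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1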
00:11:50.307 request: 00:11:50.307 { 00:11:50.307 "name": "TLSTEST", 00:11:50.307 "trtype": "tcp", 00:11:50.307 "traddr": "10.0.0.2", 00:11:50.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:50.307 "adrfam": "ipv4", 00:11:50.307 "trsvcid": "4420", 00:11:50.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:50.307 "method": "bdev_nvme_attach_controller", 00:11:50.307 "req_id": 1 00:11:50.307 } 00:11:50.307 Got JSON-RPC error response 00:11:50.307 response: 00:11:50.307 { 00:11:50.307 "code": -32602, 00:11:50.307 "message": "Invalid parameters" 00:11:50.307 } 00:11:50.307 02:56:29 -- target/tls.sh@36 -- # killprocess 83402 00:11:50.307 02:56:29 -- common/autotest_common.sh@936 -- # '[' -z 83402 ']' 00:11:50.307 02:56:29 -- common/autotest_common.sh@940 -- # kill -0 83402 00:11:50.307 02:56:29 -- common/autotest_common.sh@941 -- # uname 00:11:50.307 02:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.307 02:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83402 00:11:50.307 killing process with pid 83402 00:11:50.307 Received shutdown signal, test time was about 10.000000 seconds 00:11:50.307 00:11:50.307 Latency(us) 00:11:50.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:50.307 =================================================================================================================== 00:11:50.307 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:50.307 02:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:11:50.307 02:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:11:50.307 02:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83402' 00:11:50.307 02:56:29 -- common/autotest_common.sh@955 -- # kill 83402 00:11:50.307 02:56:29 -- common/autotest_common.sh@960 -- # wait 83402 00:11:50.566 02:56:29 -- target/tls.sh@37 -- # return 1 00:11:50.566 02:56:29 -- common/autotest_common.sh@641 -- # es=1 00:11:50.566 02:56:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:11:50.566 02:56:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:11:50.566 02:56:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:11:50.566 02:56:29 -- target/tls.sh@158 -- # killprocess 82981 00:11:50.566 02:56:29 -- common/autotest_common.sh@936 -- # '[' -z 82981 ']' 00:11:50.566 02:56:29 -- common/autotest_common.sh@940 -- # kill -0 82981 00:11:50.566 02:56:29 -- common/autotest_common.sh@941 -- # uname 00:11:50.566 02:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:50.566 02:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82981 00:11:50.566 killing process with pid 82981 00:11:50.566 02:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:11:50.566 02:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:11:50.566 02:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82981' 00:11:50.566 02:56:29 -- common/autotest_common.sh@955 -- # kill 82981 00:11:50.566 [2024-04-23 02:56:29.577599] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:11:50.566 02:56:29 -- common/autotest_common.sh@960 -- # wait 82981 00:11:50.566 02:56:29 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:11:50.566 02:56:29 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 
00:11:50.566 02:56:29 -- nvmf/common.sh@691 -- # local prefix key digest 00:11:50.566 02:56:29 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:11:50.566 02:56:29 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:50.566 02:56:29 -- nvmf/common.sh@693 -- # digest=2 00:11:50.566 02:56:29 -- nvmf/common.sh@694 -- # python - 00:11:50.824 02:56:29 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:50.824 02:56:29 -- target/tls.sh@160 -- # mktemp 00:11:50.824 02:56:29 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.WyEpbZx3z7 00:11:50.824 02:56:29 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:50.824 02:56:29 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.WyEpbZx3z7 00:11:50.824 02:56:29 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:11:50.824 02:56:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:50.824 02:56:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:50.824 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:11:50.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.824 02:56:29 -- nvmf/common.sh@470 -- # nvmfpid=83445 00:11:50.824 02:56:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:50.824 02:56:29 -- nvmf/common.sh@471 -- # waitforlisten 83445 00:11:50.824 02:56:29 -- common/autotest_common.sh@817 -- # '[' -z 83445 ']' 00:11:50.824 02:56:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.824 02:56:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:50.824 02:56:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:50.824 02:56:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:50.824 02:56:29 -- common/autotest_common.sh@10 -- # set +x 00:11:50.824 [2024-04-23 02:56:29.820664] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:50.824 [2024-04-23 02:56:29.820930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:50.824 [2024-04-23 02:56:29.937685] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:50.824 [2024-04-23 02:56:29.954025] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:51.082 [2024-04-23 02:56:29.988217] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.082 [2024-04-23 02:56:29.988527] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:51.082 [2024-04-23 02:56:29.988763] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.082 [2024-04-23 02:56:29.988887] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.082 [2024-04-23 02:56:29.988972] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
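With both negative cases done, the trace derives a long-format interchange PSK: prefix NVMeTLSkey-1, a 48-hex-digit key, digest id 2, pushed through an inline python snippet to yield the key_long string, which is then written to a 0600 temp file. Below is a standalone sketch of that derivation; the leading 64 base64 characters of the observed output match base64 of the ASCII key text, so that part is certain, but the trailing CRC-32 and its byte order are an assumption, not a spec quotation:

prefix=NVMeTLSkey-1
key=00112233445566778899aabbccddeeff0011223344556677
digest=2
python3 - "$prefix" "$key" "$digest" <<'PY'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
payload = key.encode()                                # ASCII hex text, not raw bytes
payload += zlib.crc32(payload).to_bytes(4, "little")  # CRC tail; byte order assumed
print(f"{prefix}:{digest:02}:{base64.b64encode(payload).decode()}:")
PY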
00:11:51.082 [2024-04-23 02:56:29.989088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.649 02:56:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:51.649 02:56:30 -- common/autotest_common.sh@850 -- # return 0 00:11:51.649 02:56:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:51.649 02:56:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:51.649 02:56:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.649 02:56:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.649 02:56:30 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.WyEpbZx3z7 00:11:51.649 02:56:30 -- target/tls.sh@49 -- # local key=/tmp/tmp.WyEpbZx3z7 00:11:51.649 02:56:30 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:51.908 [2024-04-23 02:56:30.952239] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.908 02:56:30 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:52.167 02:56:31 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:52.426 [2024-04-23 02:56:31.360325] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:52.426 [2024-04-23 02:56:31.360548] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:52.426 02:56:31 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:52.685 malloc0 00:11:52.685 02:56:31 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:52.944 02:56:31 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:11:52.944 [2024-04-23 02:56:32.019802] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:11:52.944 02:56:32 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WyEpbZx3z7 00:11:52.944 02:56:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:52.944 02:56:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:52.944 02:56:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:52.944 02:56:32 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WyEpbZx3z7' 00:11:52.944 02:56:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:52.944 02:56:32 -- target/tls.sh@28 -- # bdevperf_pid=83496 00:11:52.944 02:56:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:52.944 02:56:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:52.944 02:56:32 -- target/tls.sh@31 -- # waitforlisten 83496 /var/tmp/bdevperf.sock 00:11:52.944 02:56:32 -- common/autotest_common.sh@817 -- # '[' -z 83496 ']' 00:11:52.944 02:56:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:52.944 02:56:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:52.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
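Target-side setup for the positive case, consolidated (RPCs copied from the trace; -k marks the listener as TLS, and the file-based --psk on nvmf_subsystem_add_host is the deprecated "PSK path" variant that the warning above schedules for removal in v24.09):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
KEY=/tmp/tmp.WyEpbZx3z7
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0   # 32 MiB malloc bdev, 4 KiB blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"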
00:11:52.944 02:56:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:52.944 02:56:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:52.944 02:56:32 -- common/autotest_common.sh@10 -- # set +x 00:11:52.944 [2024-04-23 02:56:32.088322] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:11:52.944 [2024-04-23 02:56:32.088418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83496 ] 00:11:53.203 [2024-04-23 02:56:32.210696] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:11:53.203 [2024-04-23 02:56:32.228623] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.203 [2024-04-23 02:56:32.270181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:53.203 02:56:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:53.203 02:56:32 -- common/autotest_common.sh@850 -- # return 0 00:11:53.203 02:56:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:11:53.463 [2024-04-23 02:56:32.533802] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:53.463 [2024-04-23 02:56:32.533954] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:11:53.463 TLSTESTn1 00:11:53.721 02:56:32 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:53.721 Running I/O for 10 seconds... 
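This time the attach succeeds (the target knows host1's PSK), the TLSTESTn1 bdev appears, and the queued verify job is released with perform_tests. Note the -t 20 here is the RPC helper's own response timeout; the workload duration stays the -t 10 that bdevperf was started with:

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -t 20 -s /var/tmp/bdevperf.sock perform_tests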
00:12:03.694 00:12:03.694 Latency(us) 00:12:03.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.694 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:03.694 Verification LBA range: start 0x0 length 0x2000 00:12:03.694 TLSTESTn1 : 10.02 4417.86 17.26 0.00 0.00 28919.43 6255.71 26691.03 00:12:03.694 =================================================================================================================== 00:12:03.694 Total : 4417.86 17.26 0.00 0.00 28919.43 6255.71 26691.03 00:12:03.694 0 00:12:03.694 02:56:42 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:03.694 02:56:42 -- target/tls.sh@45 -- # killprocess 83496 00:12:03.694 02:56:42 -- common/autotest_common.sh@936 -- # '[' -z 83496 ']' 00:12:03.694 02:56:42 -- common/autotest_common.sh@940 -- # kill -0 83496 00:12:03.694 02:56:42 -- common/autotest_common.sh@941 -- # uname 00:12:03.694 02:56:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:03.694 02:56:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83496 00:12:03.694 02:56:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:03.694 02:56:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:03.694 killing process with pid 83496 00:12:03.694 02:56:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83496' 00:12:03.694 02:56:42 -- common/autotest_common.sh@955 -- # kill 83496 00:12:03.694 02:56:42 -- common/autotest_common.sh@960 -- # wait 83496 00:12:03.694 Received shutdown signal, test time was about 10.000000 seconds 00:12:03.694 00:12:03.694 Latency(us) 00:12:03.694 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:03.694 =================================================================================================================== 00:12:03.694 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:03.694 [2024-04-23 02:56:42.769980] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:03.953 02:56:42 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.WyEpbZx3z7 00:12:03.953 02:56:42 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WyEpbZx3z7 00:12:03.953 02:56:42 -- common/autotest_common.sh@638 -- # local es=0 00:12:03.954 02:56:42 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WyEpbZx3z7 00:12:03.954 02:56:42 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:12:03.954 02:56:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.954 02:56:42 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:12:03.954 02:56:42 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:03.954 02:56:42 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.WyEpbZx3z7 00:12:03.954 02:56:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:03.954 02:56:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:03.954 02:56:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:03.954 02:56:42 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.WyEpbZx3z7' 00:12:03.954 02:56:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:03.954 02:56:42 -- target/tls.sh@28 -- # bdevperf_pid=83617 00:12:03.954 
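The run above sustained roughly 4.4K IOPS of 4 KiB verify traffic over the TLS-wrapped queue pair for the full 10 seconds. The harness then loosens the key file to 0666 on purpose: the next attach is expected to fail with "Incorrect permissions for PSK file", since SPDK refuses to load a PSK readable by group or other. A quick local check of the same condition (stat -c is GNU coreutils syntax, assumed available here as elsewhere in autotest):

KEY=/tmp/tmp.WyEpbZx3z7
chmod 0666 "$KEY"                 # deliberately too permissive
perms=$(stat -c '%a' "$KEY")
[ "$perms" = 600 ] || echo "mode $perms on $KEY: SPDK will reject this PSK file"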
02:56:42 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:03.954 02:56:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:03.954 02:56:42 -- target/tls.sh@31 -- # waitforlisten 83617 /var/tmp/bdevperf.sock 00:12:03.954 02:56:42 -- common/autotest_common.sh@817 -- # '[' -z 83617 ']' 00:12:03.954 02:56:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:03.954 02:56:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:03.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:03.954 02:56:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:03.954 02:56:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:03.954 02:56:42 -- common/autotest_common.sh@10 -- # set +x 00:12:03.954 [2024-04-23 02:56:42.968781] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:03.954 [2024-04-23 02:56:42.968879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83617 ] 00:12:03.954 [2024-04-23 02:56:43.090774] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:03.954 [2024-04-23 02:56:43.107682] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.212 [2024-04-23 02:56:43.141306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.212 02:56:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:04.212 02:56:43 -- common/autotest_common.sh@850 -- # return 0 00:12:04.212 02:56:43 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:12:04.471 [2024-04-23 02:56:43.448515] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:04.471 [2024-04-23 02:56:43.448612] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:04.471 [2024-04-23 02:56:43.448621] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.WyEpbZx3z7 00:12:04.471 request: 00:12:04.471 { 00:12:04.471 "name": "TLSTEST", 00:12:04.471 "trtype": "tcp", 00:12:04.471 "traddr": "10.0.0.2", 00:12:04.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:04.471 "adrfam": "ipv4", 00:12:04.471 "trsvcid": "4420", 00:12:04.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.471 "psk": "/tmp/tmp.WyEpbZx3z7", 00:12:04.471 "method": "bdev_nvme_attach_controller", 00:12:04.471 "req_id": 1 00:12:04.471 } 00:12:04.471 Got JSON-RPC error response 00:12:04.471 response: 00:12:04.471 { 00:12:04.471 "code": -1, 00:12:04.471 "message": "Operation not permitted" 00:12:04.471 } 00:12:04.471 02:56:43 -- target/tls.sh@36 -- # killprocess 83617 00:12:04.471 02:56:43 -- common/autotest_common.sh@936 -- # '[' -z 83617 ']' 00:12:04.471 02:56:43 -- common/autotest_common.sh@940 -- # kill -0 83617 00:12:04.471 02:56:43 -- common/autotest_common.sh@941 -- # uname 00:12:04.471 02:56:43 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:04.471 02:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83617 00:12:04.471 02:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:04.471 02:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:04.471 killing process with pid 83617 00:12:04.471 02:56:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83617' 00:12:04.471 02:56:43 -- common/autotest_common.sh@955 -- # kill 83617 00:12:04.471 Received shutdown signal, test time was about 10.000000 seconds 00:12:04.471 00:12:04.471 Latency(us) 00:12:04.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:04.471 =================================================================================================================== 00:12:04.471 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:04.471 02:56:43 -- common/autotest_common.sh@960 -- # wait 83617 00:12:04.471 02:56:43 -- target/tls.sh@37 -- # return 1 00:12:04.471 02:56:43 -- common/autotest_common.sh@641 -- # es=1 00:12:04.471 02:56:43 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:04.471 02:56:43 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:04.471 02:56:43 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:04.471 02:56:43 -- target/tls.sh@174 -- # killprocess 83445 00:12:04.471 02:56:43 -- common/autotest_common.sh@936 -- # '[' -z 83445 ']' 00:12:04.471 02:56:43 -- common/autotest_common.sh@940 -- # kill -0 83445 00:12:04.471 02:56:43 -- common/autotest_common.sh@941 -- # uname 00:12:04.471 02:56:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:04.471 02:56:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83445 00:12:04.730 02:56:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:04.730 02:56:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:04.730 killing process with pid 83445 00:12:04.730 02:56:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83445' 00:12:04.730 02:56:43 -- common/autotest_common.sh@955 -- # kill 83445 00:12:04.730 [2024-04-23 02:56:43.643717] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:04.730 02:56:43 -- common/autotest_common.sh@960 -- # wait 83445 00:12:04.730 02:56:43 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:12:04.730 02:56:43 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:04.730 02:56:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:04.730 02:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:04.730 02:56:43 -- nvmf/common.sh@470 -- # nvmfpid=83641 00:12:04.730 02:56:43 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:04.730 02:56:43 -- nvmf/common.sh@471 -- # waitforlisten 83641 00:12:04.730 02:56:43 -- common/autotest_common.sh@817 -- # '[' -z 83641 ']' 00:12:04.730 02:56:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.730 02:56:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:04.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.730 02:56:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
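After the client-side rejection, the harness brings up another nvmf target to probe the server-side check. nvmfappstart as invoked here, reduced to its moving parts (the target runs inside the nvmf_tgt_ns_spdk network namespace; the poll loop again stands in for waitforlisten):

ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # simplified waitforlisten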
00:12:04.730 02:56:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:04.730 02:56:43 -- common/autotest_common.sh@10 -- # set +x 00:12:04.730 [2024-04-23 02:56:43.849594] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:04.730 [2024-04-23 02:56:43.849691] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.988 [2024-04-23 02:56:43.972884] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:04.988 [2024-04-23 02:56:43.991065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.988 [2024-04-23 02:56:44.026674] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.988 [2024-04-23 02:56:44.026730] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:04.988 [2024-04-23 02:56:44.026755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.988 [2024-04-23 02:56:44.026762] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.988 [2024-04-23 02:56:44.026768] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.988 [2024-04-23 02:56:44.026793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.924 02:56:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:05.924 02:56:44 -- common/autotest_common.sh@850 -- # return 0 00:12:05.925 02:56:44 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:05.925 02:56:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:05.925 02:56:44 -- common/autotest_common.sh@10 -- # set +x 00:12:05.925 02:56:44 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.925 02:56:44 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.WyEpbZx3z7 00:12:05.925 02:56:44 -- common/autotest_common.sh@638 -- # local es=0 00:12:05.925 02:56:44 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.WyEpbZx3z7 00:12:05.925 02:56:44 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:12:05.925 02:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:05.925 02:56:44 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:12:05.925 02:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:05.925 02:56:44 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.WyEpbZx3z7 00:12:05.925 02:56:44 -- target/tls.sh@49 -- # local key=/tmp/tmp.WyEpbZx3z7 00:12:05.925 02:56:44 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:05.925 [2024-04-23 02:56:44.962251] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.925 02:56:44 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:06.201 02:56:45 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:06.469 [2024-04-23 02:56:45.346270] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:12:06.469 [2024-04-23 02:56:45.346502] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.469 02:56:45 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:06.469 malloc0 00:12:06.469 02:56:45 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:06.727 02:56:45 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:12:06.986 [2024-04-23 02:56:45.956210] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:06.986 [2024-04-23 02:56:45.956247] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:06.986 [2024-04-23 02:56:45.956285] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:06.986 request: 00:12:06.986 { 00:12:06.986 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.986 "host": "nqn.2016-06.io.spdk:host1", 00:12:06.986 "psk": "/tmp/tmp.WyEpbZx3z7", 00:12:06.986 "method": "nvmf_subsystem_add_host", 00:12:06.986 "req_id": 1 00:12:06.986 } 00:12:06.986 Got JSON-RPC error response 00:12:06.986 response: 00:12:06.986 { 00:12:06.986 "code": -32603, 00:12:06.986 "message": "Internal error" 00:12:06.986 } 00:12:06.986 02:56:45 -- common/autotest_common.sh@641 -- # es=1 00:12:06.986 02:56:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:06.986 02:56:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:06.986 02:56:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:06.986 02:56:45 -- target/tls.sh@180 -- # killprocess 83641 00:12:06.986 02:56:45 -- common/autotest_common.sh@936 -- # '[' -z 83641 ']' 00:12:06.986 02:56:45 -- common/autotest_common.sh@940 -- # kill -0 83641 00:12:06.986 02:56:45 -- common/autotest_common.sh@941 -- # uname 00:12:06.986 02:56:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:06.986 02:56:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83641 00:12:06.986 killing process with pid 83641 00:12:06.986 02:56:46 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:06.986 02:56:46 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:06.986 02:56:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83641' 00:12:06.986 02:56:46 -- common/autotest_common.sh@955 -- # kill 83641 00:12:06.986 02:56:46 -- common/autotest_common.sh@960 -- # wait 83641 00:12:06.986 02:56:46 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.WyEpbZx3z7 00:12:07.245 02:56:46 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:07.245 02:56:46 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:07.245 02:56:46 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:07.245 02:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:07.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
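The server-side counterpart failed as intended: with the key still 0666, tcp_load_psk rejects it inside nvmf_subsystem_add_host, and the error surfaces as JSON-RPC -32603 "Internal error" (contrast the client-side -1 "Operation not permitted" a moment earlier). The harness restores strict permissions before bringing up the next target:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7   # fails with -32603 while 0666
chmod 0600 /tmp/tmp.WyEpbZx3z7                            # required before the retry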
00:12:07.245 02:56:46 -- nvmf/common.sh@470 -- # nvmfpid=83705 00:12:07.245 02:56:46 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:07.245 02:56:46 -- nvmf/common.sh@471 -- # waitforlisten 83705 00:12:07.245 02:56:46 -- common/autotest_common.sh@817 -- # '[' -z 83705 ']' 00:12:07.245 02:56:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.245 02:56:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:07.245 02:56:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:07.245 02:56:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:07.245 02:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:07.245 [2024-04-23 02:56:46.193196] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:07.245 [2024-04-23 02:56:46.193439] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.245 [2024-04-23 02:56:46.309254] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:07.245 [2024-04-23 02:56:46.323599] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.245 [2024-04-23 02:56:46.355647] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:07.245 [2024-04-23 02:56:46.355931] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:07.245 [2024-04-23 02:56:46.356145] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:07.245 [2024-04-23 02:56:46.356269] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:07.245 [2024-04-23 02:56:46.356353] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
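The startup banner lists two debug hooks worth noting for anyone replaying this log: a live tracepoint snapshot and the shared-memory trace file. The commands below are quoted from the *NOTICE* lines; the spdk_trace binary path is an assumption, since the trace does not print it:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0   # snapshot of nvmf tracepoints
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0                       # copy for offline analysis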
00:12:07.245 [2024-04-23 02:56:46.356422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:07.504 02:56:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:07.504 02:56:46 -- common/autotest_common.sh@850 -- # return 0 00:12:07.504 02:56:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:07.504 02:56:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:07.504 02:56:46 -- common/autotest_common.sh@10 -- # set +x 00:12:07.504 02:56:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.504 02:56:46 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.WyEpbZx3z7 00:12:07.504 02:56:46 -- target/tls.sh@49 -- # local key=/tmp/tmp.WyEpbZx3z7 00:12:07.504 02:56:46 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:07.764 [2024-04-23 02:56:46.694767] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.764 02:56:46 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:08.023 02:56:46 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:08.023 [2024-04-23 02:56:47.162905] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:08.023 [2024-04-23 02:56:47.163114] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.282 02:56:47 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:08.282 malloc0 00:12:08.282 02:56:47 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:08.540 02:56:47 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:12:08.799 [2024-04-23 02:56:47.824688] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:08.799 02:56:47 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:08.799 02:56:47 -- target/tls.sh@188 -- # bdevperf_pid=83741 00:12:08.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:08.799 02:56:47 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:08.799 02:56:47 -- target/tls.sh@191 -- # waitforlisten 83741 /var/tmp/bdevperf.sock 00:12:08.799 02:56:47 -- common/autotest_common.sh@817 -- # '[' -z 83741 ']' 00:12:08.799 02:56:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:08.799 02:56:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:08.799 02:56:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:08.799 02:56:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:08.799 02:56:47 -- common/autotest_common.sh@10 -- # set +x 00:12:08.799 [2024-04-23 02:56:47.884211] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:12:08.799 [2024-04-23 02:56:47.884485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83741 ] 00:12:09.058 [2024-04-23 02:56:48.001180] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:09.058 [2024-04-23 02:56:48.019452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.058 [2024-04-23 02:56:48.051823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.058 02:56:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:09.058 02:56:48 -- common/autotest_common.sh@850 -- # return 0 00:12:09.058 02:56:48 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:12:09.316 [2024-04-23 02:56:48.311631] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:09.316 [2024-04-23 02:56:48.311726] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:09.316 TLSTESTn1 00:12:09.316 02:56:48 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:09.575 02:56:48 -- target/tls.sh@196 -- # tgtconf='{ 00:12:09.575 "subsystems": [ 00:12:09.575 { 00:12:09.575 "subsystem": "keyring", 00:12:09.575 "config": [] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "iobuf", 00:12:09.575 "config": [ 00:12:09.575 { 00:12:09.575 "method": "iobuf_set_options", 00:12:09.575 "params": { 00:12:09.575 "small_pool_count": 8192, 00:12:09.575 "large_pool_count": 1024, 00:12:09.575 "small_bufsize": 8192, 00:12:09.575 "large_bufsize": 135168 00:12:09.575 } 00:12:09.575 } 00:12:09.575 ] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "sock", 00:12:09.575 "config": [ 00:12:09.575 { 00:12:09.575 "method": "sock_impl_set_options", 00:12:09.575 "params": { 00:12:09.575 "impl_name": "uring", 00:12:09.575 "recv_buf_size": 2097152, 00:12:09.575 "send_buf_size": 2097152, 00:12:09.575 "enable_recv_pipe": true, 00:12:09.575 "enable_quickack": false, 00:12:09.575 "enable_placement_id": 0, 00:12:09.575 "enable_zerocopy_send_server": false, 00:12:09.575 "enable_zerocopy_send_client": false, 00:12:09.575 "zerocopy_threshold": 0, 00:12:09.575 "tls_version": 0, 00:12:09.575 "enable_ktls": false 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "sock_impl_set_options", 00:12:09.575 "params": { 00:12:09.575 "impl_name": "posix", 00:12:09.575 "recv_buf_size": 2097152, 00:12:09.575 "send_buf_size": 2097152, 00:12:09.575 "enable_recv_pipe": true, 00:12:09.575 "enable_quickack": false, 00:12:09.575 "enable_placement_id": 0, 00:12:09.575 "enable_zerocopy_send_server": true, 00:12:09.575 "enable_zerocopy_send_client": false, 00:12:09.575 "zerocopy_threshold": 0, 00:12:09.575 "tls_version": 0, 00:12:09.575 "enable_ktls": false 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "sock_impl_set_options", 00:12:09.575 "params": { 00:12:09.575 "impl_name": "ssl", 00:12:09.575 "recv_buf_size": 4096, 00:12:09.575 "send_buf_size": 4096, 00:12:09.575 "enable_recv_pipe": true, 00:12:09.575 
"enable_quickack": false, 00:12:09.575 "enable_placement_id": 0, 00:12:09.575 "enable_zerocopy_send_server": true, 00:12:09.575 "enable_zerocopy_send_client": false, 00:12:09.575 "zerocopy_threshold": 0, 00:12:09.575 "tls_version": 0, 00:12:09.575 "enable_ktls": false 00:12:09.575 } 00:12:09.575 } 00:12:09.575 ] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "vmd", 00:12:09.575 "config": [] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "accel", 00:12:09.575 "config": [ 00:12:09.575 { 00:12:09.575 "method": "accel_set_options", 00:12:09.575 "params": { 00:12:09.575 "small_cache_size": 128, 00:12:09.575 "large_cache_size": 16, 00:12:09.575 "task_count": 2048, 00:12:09.575 "sequence_count": 2048, 00:12:09.575 "buf_count": 2048 00:12:09.575 } 00:12:09.575 } 00:12:09.575 ] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "bdev", 00:12:09.575 "config": [ 00:12:09.575 { 00:12:09.575 "method": "bdev_set_options", 00:12:09.575 "params": { 00:12:09.575 "bdev_io_pool_size": 65535, 00:12:09.575 "bdev_io_cache_size": 256, 00:12:09.575 "bdev_auto_examine": true, 00:12:09.575 "iobuf_small_cache_size": 128, 00:12:09.575 "iobuf_large_cache_size": 16 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "bdev_raid_set_options", 00:12:09.575 "params": { 00:12:09.575 "process_window_size_kb": 1024 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "bdev_iscsi_set_options", 00:12:09.575 "params": { 00:12:09.575 "timeout_sec": 30 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "bdev_nvme_set_options", 00:12:09.575 "params": { 00:12:09.575 "action_on_timeout": "none", 00:12:09.575 "timeout_us": 0, 00:12:09.575 "timeout_admin_us": 0, 00:12:09.575 "keep_alive_timeout_ms": 10000, 00:12:09.575 "arbitration_burst": 0, 00:12:09.575 "low_priority_weight": 0, 00:12:09.575 "medium_priority_weight": 0, 00:12:09.575 "high_priority_weight": 0, 00:12:09.575 "nvme_adminq_poll_period_us": 10000, 00:12:09.575 "nvme_ioq_poll_period_us": 0, 00:12:09.575 "io_queue_requests": 0, 00:12:09.575 "delay_cmd_submit": true, 00:12:09.575 "transport_retry_count": 4, 00:12:09.575 "bdev_retry_count": 3, 00:12:09.575 "transport_ack_timeout": 0, 00:12:09.575 "ctrlr_loss_timeout_sec": 0, 00:12:09.575 "reconnect_delay_sec": 0, 00:12:09.575 "fast_io_fail_timeout_sec": 0, 00:12:09.575 "disable_auto_failback": false, 00:12:09.575 "generate_uuids": false, 00:12:09.575 "transport_tos": 0, 00:12:09.575 "nvme_error_stat": false, 00:12:09.575 "rdma_srq_size": 0, 00:12:09.575 "io_path_stat": false, 00:12:09.575 "allow_accel_sequence": false, 00:12:09.575 "rdma_max_cq_size": 0, 00:12:09.575 "rdma_cm_event_timeout_ms": 0, 00:12:09.575 "dhchap_digests": [ 00:12:09.575 "sha256", 00:12:09.575 "sha384", 00:12:09.575 "sha512" 00:12:09.575 ], 00:12:09.575 "dhchap_dhgroups": [ 00:12:09.575 "null", 00:12:09.575 "ffdhe2048", 00:12:09.575 "ffdhe3072", 00:12:09.575 "ffdhe4096", 00:12:09.575 "ffdhe6144", 00:12:09.575 "ffdhe8192" 00:12:09.575 ] 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "bdev_nvme_set_hotplug", 00:12:09.575 "params": { 00:12:09.575 "period_us": 100000, 00:12:09.575 "enable": false 00:12:09.575 } 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "bdev_malloc_create", 00:12:09.575 "params": { 00:12:09.575 "name": "malloc0", 00:12:09.575 "num_blocks": 8192, 00:12:09.575 "block_size": 4096, 00:12:09.575 "physical_block_size": 4096, 00:12:09.575 "uuid": "5c3f5556-9cab-42d9-8737-892e8dedc204", 00:12:09.575 "optimal_io_boundary": 0 00:12:09.575 } 
00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "method": "bdev_wait_for_examine" 00:12:09.575 } 00:12:09.575 ] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "nbd", 00:12:09.575 "config": [] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "scheduler", 00:12:09.575 "config": [ 00:12:09.575 { 00:12:09.575 "method": "framework_set_scheduler", 00:12:09.575 "params": { 00:12:09.575 "name": "static" 00:12:09.575 } 00:12:09.575 } 00:12:09.575 ] 00:12:09.575 }, 00:12:09.575 { 00:12:09.575 "subsystem": "nvmf", 00:12:09.576 "config": [ 00:12:09.576 { 00:12:09.576 "method": "nvmf_set_config", 00:12:09.576 "params": { 00:12:09.576 "discovery_filter": "match_any", 00:12:09.576 "admin_cmd_passthru": { 00:12:09.576 "identify_ctrlr": false 00:12:09.576 } 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_set_max_subsystems", 00:12:09.576 "params": { 00:12:09.576 "max_subsystems": 1024 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_set_crdt", 00:12:09.576 "params": { 00:12:09.576 "crdt1": 0, 00:12:09.576 "crdt2": 0, 00:12:09.576 "crdt3": 0 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_create_transport", 00:12:09.576 "params": { 00:12:09.576 "trtype": "TCP", 00:12:09.576 "max_queue_depth": 128, 00:12:09.576 "max_io_qpairs_per_ctrlr": 127, 00:12:09.576 "in_capsule_data_size": 4096, 00:12:09.576 "max_io_size": 131072, 00:12:09.576 "io_unit_size": 131072, 00:12:09.576 "max_aq_depth": 128, 00:12:09.576 "num_shared_buffers": 511, 00:12:09.576 "buf_cache_size": 4294967295, 00:12:09.576 "dif_insert_or_strip": false, 00:12:09.576 "zcopy": false, 00:12:09.576 "c2h_success": false, 00:12:09.576 "sock_priority": 0, 00:12:09.576 "abort_timeout_sec": 1, 00:12:09.576 "ack_timeout": 0, 00:12:09.576 "data_wr_pool_size": 0 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_create_subsystem", 00:12:09.576 "params": { 00:12:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.576 "allow_any_host": false, 00:12:09.576 "serial_number": "SPDK00000000000001", 00:12:09.576 "model_number": "SPDK bdev Controller", 00:12:09.576 "max_namespaces": 10, 00:12:09.576 "min_cntlid": 1, 00:12:09.576 "max_cntlid": 65519, 00:12:09.576 "ana_reporting": false 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_subsystem_add_host", 00:12:09.576 "params": { 00:12:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.576 "host": "nqn.2016-06.io.spdk:host1", 00:12:09.576 "psk": "/tmp/tmp.WyEpbZx3z7" 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_subsystem_add_ns", 00:12:09.576 "params": { 00:12:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.576 "namespace": { 00:12:09.576 "nsid": 1, 00:12:09.576 "bdev_name": "malloc0", 00:12:09.576 "nguid": "5C3F55569CAB42D98737892E8DEDC204", 00:12:09.576 "uuid": "5c3f5556-9cab-42d9-8737-892e8dedc204", 00:12:09.576 "no_auto_visible": false 00:12:09.576 } 00:12:09.576 } 00:12:09.576 }, 00:12:09.576 { 00:12:09.576 "method": "nvmf_subsystem_add_listener", 00:12:09.576 "params": { 00:12:09.576 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.576 "listen_address": { 00:12:09.576 "trtype": "TCP", 00:12:09.576 "adrfam": "IPv4", 00:12:09.576 "traddr": "10.0.0.2", 00:12:09.576 "trsvcid": "4420" 00:12:09.576 }, 00:12:09.576 "secure_channel": true 00:12:09.576 } 00:12:09.576 } 00:12:09.576 ] 00:12:09.576 } 00:12:09.576 ] 00:12:09.576 }' 00:12:09.576 02:56:48 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
save_config 00:12:10.143 02:56:48 -- target/tls.sh@197 -- # bdevperfconf='{ 00:12:10.143 "subsystems": [ 00:12:10.143 { 00:12:10.143 "subsystem": "keyring", 00:12:10.143 "config": [] 00:12:10.143 }, 00:12:10.143 { 00:12:10.143 "subsystem": "iobuf", 00:12:10.143 "config": [ 00:12:10.143 { 00:12:10.143 "method": "iobuf_set_options", 00:12:10.143 "params": { 00:12:10.143 "small_pool_count": 8192, 00:12:10.143 "large_pool_count": 1024, 00:12:10.143 "small_bufsize": 8192, 00:12:10.143 "large_bufsize": 135168 00:12:10.143 } 00:12:10.143 } 00:12:10.143 ] 00:12:10.143 }, 00:12:10.143 { 00:12:10.143 "subsystem": "sock", 00:12:10.143 "config": [ 00:12:10.143 { 00:12:10.143 "method": "sock_impl_set_options", 00:12:10.143 "params": { 00:12:10.143 "impl_name": "uring", 00:12:10.143 "recv_buf_size": 2097152, 00:12:10.143 "send_buf_size": 2097152, 00:12:10.143 "enable_recv_pipe": true, 00:12:10.143 "enable_quickack": false, 00:12:10.143 "enable_placement_id": 0, 00:12:10.143 "enable_zerocopy_send_server": false, 00:12:10.143 "enable_zerocopy_send_client": false, 00:12:10.143 "zerocopy_threshold": 0, 00:12:10.143 "tls_version": 0, 00:12:10.143 "enable_ktls": false 00:12:10.143 } 00:12:10.143 }, 00:12:10.143 { 00:12:10.143 "method": "sock_impl_set_options", 00:12:10.143 "params": { 00:12:10.143 "impl_name": "posix", 00:12:10.143 "recv_buf_size": 2097152, 00:12:10.143 "send_buf_size": 2097152, 00:12:10.144 "enable_recv_pipe": true, 00:12:10.144 "enable_quickack": false, 00:12:10.144 "enable_placement_id": 0, 00:12:10.144 "enable_zerocopy_send_server": true, 00:12:10.144 "enable_zerocopy_send_client": false, 00:12:10.144 "zerocopy_threshold": 0, 00:12:10.144 "tls_version": 0, 00:12:10.144 "enable_ktls": false 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "method": "sock_impl_set_options", 00:12:10.144 "params": { 00:12:10.144 "impl_name": "ssl", 00:12:10.144 "recv_buf_size": 4096, 00:12:10.144 "send_buf_size": 4096, 00:12:10.144 "enable_recv_pipe": true, 00:12:10.144 "enable_quickack": false, 00:12:10.144 "enable_placement_id": 0, 00:12:10.144 "enable_zerocopy_send_server": true, 00:12:10.144 "enable_zerocopy_send_client": false, 00:12:10.144 "zerocopy_threshold": 0, 00:12:10.144 "tls_version": 0, 00:12:10.144 "enable_ktls": false 00:12:10.144 } 00:12:10.144 } 00:12:10.144 ] 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "subsystem": "vmd", 00:12:10.144 "config": [] 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "subsystem": "accel", 00:12:10.144 "config": [ 00:12:10.144 { 00:12:10.144 "method": "accel_set_options", 00:12:10.144 "params": { 00:12:10.144 "small_cache_size": 128, 00:12:10.144 "large_cache_size": 16, 00:12:10.144 "task_count": 2048, 00:12:10.144 "sequence_count": 2048, 00:12:10.144 "buf_count": 2048 00:12:10.144 } 00:12:10.144 } 00:12:10.144 ] 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "subsystem": "bdev", 00:12:10.144 "config": [ 00:12:10.144 { 00:12:10.144 "method": "bdev_set_options", 00:12:10.144 "params": { 00:12:10.144 "bdev_io_pool_size": 65535, 00:12:10.144 "bdev_io_cache_size": 256, 00:12:10.144 "bdev_auto_examine": true, 00:12:10.144 "iobuf_small_cache_size": 128, 00:12:10.144 "iobuf_large_cache_size": 16 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "method": "bdev_raid_set_options", 00:12:10.144 "params": { 00:12:10.144 "process_window_size_kb": 1024 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "method": "bdev_iscsi_set_options", 00:12:10.144 "params": { 00:12:10.144 "timeout_sec": 30 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 
"method": "bdev_nvme_set_options", 00:12:10.144 "params": { 00:12:10.144 "action_on_timeout": "none", 00:12:10.144 "timeout_us": 0, 00:12:10.144 "timeout_admin_us": 0, 00:12:10.144 "keep_alive_timeout_ms": 10000, 00:12:10.144 "arbitration_burst": 0, 00:12:10.144 "low_priority_weight": 0, 00:12:10.144 "medium_priority_weight": 0, 00:12:10.144 "high_priority_weight": 0, 00:12:10.144 "nvme_adminq_poll_period_us": 10000, 00:12:10.144 "nvme_ioq_poll_period_us": 0, 00:12:10.144 "io_queue_requests": 512, 00:12:10.144 "delay_cmd_submit": true, 00:12:10.144 "transport_retry_count": 4, 00:12:10.144 "bdev_retry_count": 3, 00:12:10.144 "transport_ack_timeout": 0, 00:12:10.144 "ctrlr_loss_timeout_sec": 0, 00:12:10.144 "reconnect_delay_sec": 0, 00:12:10.144 "fast_io_fail_timeout_sec": 0, 00:12:10.144 "disable_auto_failback": false, 00:12:10.144 "generate_uuids": false, 00:12:10.144 "transport_tos": 0, 00:12:10.144 "nvme_error_stat": false, 00:12:10.144 "rdma_srq_size": 0, 00:12:10.144 "io_path_stat": false, 00:12:10.144 "allow_accel_sequence": false, 00:12:10.144 "rdma_max_cq_size": 0, 00:12:10.144 "rdma_cm_event_timeout_ms": 0, 00:12:10.144 "dhchap_digests": [ 00:12:10.144 "sha256", 00:12:10.144 "sha384", 00:12:10.144 "sha512" 00:12:10.144 ], 00:12:10.144 "dhchap_dhgroups": [ 00:12:10.144 "null", 00:12:10.144 "ffdhe2048", 00:12:10.144 "ffdhe3072", 00:12:10.144 "ffdhe4096", 00:12:10.144 "ffdhe6144", 00:12:10.144 "ffdhe8192" 00:12:10.144 ] 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "method": "bdev_nvme_attach_controller", 00:12:10.144 "params": { 00:12:10.144 "name": "TLSTEST", 00:12:10.144 "trtype": "TCP", 00:12:10.144 "adrfam": "IPv4", 00:12:10.144 "traddr": "10.0.0.2", 00:12:10.144 "trsvcid": "4420", 00:12:10.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.144 "prchk_reftag": false, 00:12:10.144 "prchk_guard": false, 00:12:10.144 "ctrlr_loss_timeout_sec": 0, 00:12:10.144 "reconnect_delay_sec": 0, 00:12:10.144 "fast_io_fail_timeout_sec": 0, 00:12:10.144 "psk": "/tmp/tmp.WyEpbZx3z7", 00:12:10.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.144 "hdgst": false, 00:12:10.144 "ddgst": false 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "method": "bdev_nvme_set_hotplug", 00:12:10.144 "params": { 00:12:10.144 "period_us": 100000, 00:12:10.144 "enable": false 00:12:10.144 } 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "method": "bdev_wait_for_examine" 00:12:10.144 } 00:12:10.144 ] 00:12:10.144 }, 00:12:10.144 { 00:12:10.144 "subsystem": "nbd", 00:12:10.144 "config": [] 00:12:10.144 } 00:12:10.144 ] 00:12:10.144 }' 00:12:10.144 02:56:48 -- target/tls.sh@199 -- # killprocess 83741 00:12:10.144 02:56:48 -- common/autotest_common.sh@936 -- # '[' -z 83741 ']' 00:12:10.144 02:56:48 -- common/autotest_common.sh@940 -- # kill -0 83741 00:12:10.144 02:56:48 -- common/autotest_common.sh@941 -- # uname 00:12:10.144 02:56:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.144 02:56:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83741 00:12:10.144 killing process with pid 83741 00:12:10.144 Received shutdown signal, test time was about 10.000000 seconds 00:12:10.144 00:12:10.144 Latency(us) 00:12:10.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:10.144 =================================================================================================================== 00:12:10.144 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:10.144 02:56:49 -- common/autotest_common.sh@942 -- # 
process_name=reactor_2 00:12:10.144 02:56:49 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:10.144 02:56:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83741' 00:12:10.144 02:56:49 -- common/autotest_common.sh@955 -- # kill 83741 00:12:10.144 [2024-04-23 02:56:49.026809] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:10.144 02:56:49 -- common/autotest_common.sh@960 -- # wait 83741 00:12:10.144 02:56:49 -- target/tls.sh@200 -- # killprocess 83705 00:12:10.144 02:56:49 -- common/autotest_common.sh@936 -- # '[' -z 83705 ']' 00:12:10.144 02:56:49 -- common/autotest_common.sh@940 -- # kill -0 83705 00:12:10.144 02:56:49 -- common/autotest_common.sh@941 -- # uname 00:12:10.144 02:56:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:10.144 02:56:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83705 00:12:10.144 killing process with pid 83705 00:12:10.144 02:56:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:10.144 02:56:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:10.144 02:56:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83705' 00:12:10.144 02:56:49 -- common/autotest_common.sh@955 -- # kill 83705 00:12:10.144 [2024-04-23 02:56:49.190473] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:10.144 02:56:49 -- common/autotest_common.sh@960 -- # wait 83705 00:12:10.404 02:56:49 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:10.404 02:56:49 -- target/tls.sh@203 -- # echo '{ 00:12:10.404 "subsystems": [ 00:12:10.404 { 00:12:10.404 "subsystem": "keyring", 00:12:10.404 "config": [] 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "subsystem": "iobuf", 00:12:10.404 "config": [ 00:12:10.404 { 00:12:10.404 "method": "iobuf_set_options", 00:12:10.404 "params": { 00:12:10.404 "small_pool_count": 8192, 00:12:10.404 "large_pool_count": 1024, 00:12:10.404 "small_bufsize": 8192, 00:12:10.404 "large_bufsize": 135168 00:12:10.404 } 00:12:10.404 } 00:12:10.404 ] 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "subsystem": "sock", 00:12:10.404 "config": [ 00:12:10.404 { 00:12:10.404 "method": "sock_impl_set_options", 00:12:10.404 "params": { 00:12:10.404 "impl_name": "uring", 00:12:10.404 "recv_buf_size": 2097152, 00:12:10.404 "send_buf_size": 2097152, 00:12:10.404 "enable_recv_pipe": true, 00:12:10.404 "enable_quickack": false, 00:12:10.404 "enable_placement_id": 0, 00:12:10.404 "enable_zerocopy_send_server": false, 00:12:10.404 "enable_zerocopy_send_client": false, 00:12:10.404 "zerocopy_threshold": 0, 00:12:10.404 "tls_version": 0, 00:12:10.404 "enable_ktls": false 00:12:10.404 } 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "method": "sock_impl_set_options", 00:12:10.404 "params": { 00:12:10.404 "impl_name": "posix", 00:12:10.404 "recv_buf_size": 2097152, 00:12:10.404 "send_buf_size": 2097152, 00:12:10.404 "enable_recv_pipe": true, 00:12:10.404 "enable_quickack": false, 00:12:10.404 "enable_placement_id": 0, 00:12:10.404 "enable_zerocopy_send_server": true, 00:12:10.404 "enable_zerocopy_send_client": false, 00:12:10.404 "zerocopy_threshold": 0, 00:12:10.404 "tls_version": 0, 00:12:10.404 "enable_ktls": false 00:12:10.404 } 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "method": "sock_impl_set_options", 00:12:10.404 "params": { 00:12:10.404 "impl_name": "ssl", 
00:12:10.404 "recv_buf_size": 4096, 00:12:10.404 "send_buf_size": 4096, 00:12:10.404 "enable_recv_pipe": true, 00:12:10.404 "enable_quickack": false, 00:12:10.404 "enable_placement_id": 0, 00:12:10.404 "enable_zerocopy_send_server": true, 00:12:10.404 "enable_zerocopy_send_client": false, 00:12:10.404 "zerocopy_threshold": 0, 00:12:10.404 "tls_version": 0, 00:12:10.404 "enable_ktls": false 00:12:10.404 } 00:12:10.404 } 00:12:10.404 ] 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "subsystem": "vmd", 00:12:10.404 "config": [] 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "subsystem": "accel", 00:12:10.404 "config": [ 00:12:10.404 { 00:12:10.404 "method": "accel_set_options", 00:12:10.404 "params": { 00:12:10.404 "small_cache_size": 128, 00:12:10.404 "large_cache_size": 16, 00:12:10.404 "task_count": 2048, 00:12:10.404 "sequence_count": 2048, 00:12:10.404 "buf_count": 2048 00:12:10.404 } 00:12:10.404 } 00:12:10.404 ] 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "subsystem": "bdev", 00:12:10.404 "config": [ 00:12:10.404 { 00:12:10.404 "method": "bdev_set_options", 00:12:10.404 "params": { 00:12:10.404 "bdev_io_pool_size": 65535, 00:12:10.404 "bdev_io_cache_size": 256, 00:12:10.404 "bdev_auto_examine": true, 00:12:10.404 "iobuf_small_cache_size": 128, 00:12:10.404 "iobuf_large_cache_size": 16 00:12:10.404 } 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "method": "bdev_raid_set_options", 00:12:10.404 "params": { 00:12:10.404 "process_window_size_kb": 1024 00:12:10.404 } 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "method": "bdev_iscsi_set_options", 00:12:10.404 "params": { 00:12:10.404 "timeout_sec": 30 00:12:10.404 } 00:12:10.404 }, 00:12:10.404 { 00:12:10.404 "method": "bdev_nvme_set_options", 00:12:10.404 "params": { 00:12:10.404 "action_on_timeout": "none", 00:12:10.404 "timeout_us": 0, 00:12:10.404 "timeout_admin_us": 0, 00:12:10.404 "keep_alive_timeout_ms": 10000, 00:12:10.404 "arbitration_burst": 0, 00:12:10.404 "low_priority_weight": 0, 00:12:10.404 "medium_priority_weight": 0, 00:12:10.404 "high_priority_weight": 0, 00:12:10.404 "nvme_adminq_poll_period_us": 10000, 00:12:10.404 "nvme_ioq_poll_period_us": 0, 00:12:10.404 "io_queue_requests": 0, 00:12:10.404 "delay_cmd_submit": true, 00:12:10.404 "transport_retry_count": 4, 00:12:10.404 "bdev_retry_count": 3, 00:12:10.404 "transport_ack_timeout": 0, 00:12:10.404 "ctrlr_loss_timeout_sec": 0, 00:12:10.404 "reconnect_delay_sec": 0, 00:12:10.404 "fast_io_fail_timeout_sec": 0, 00:12:10.404 "disable_auto_failback": false, 00:12:10.404 "generate_uuids": false, 00:12:10.404 "transport_tos": 0, 00:12:10.404 "nvme_error_stat": false, 00:12:10.404 "rdma_srq_size": 0, 00:12:10.404 "io_path_stat": false, 00:12:10.404 "allow_accel_sequence": false, 00:12:10.404 "rdma_max_cq_size": 0, 00:12:10.404 "rdma_cm_event_timeout_ms": 0, 00:12:10.404 "dhchap_digests": [ 00:12:10.404 "sha256", 00:12:10.404 "sha384", 00:12:10.404 "sha512" 00:12:10.404 ], 00:12:10.404 "dhchap_dhgroups": [ 00:12:10.404 "null", 00:12:10.405 "ffdhe2048", 00:12:10.405 "ffdhe3072", 00:12:10.405 "ffdhe4096", 00:12:10.405 "ffdhe6144", 00:12:10.405 "ffdhe8192" 00:12:10.405 ] 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "bdev_nvme_set_hotplug", 00:12:10.405 "params": { 00:12:10.405 "period_us": 100000, 00:12:10.405 "enable": false 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "bdev_malloc_create", 00:12:10.405 "params": { 00:12:10.405 "name": "malloc0", 00:12:10.405 "num_blocks": 8192, 00:12:10.405 "block_size": 4096, 00:12:10.405 
"physical_block_size": 4096, 00:12:10.405 "uuid": "5c3f5556-9cab-42d9-8737-892e8dedc204", 00:12:10.405 "optimal_io_boundary": 0 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "bdev_wait_for_examine" 00:12:10.405 } 00:12:10.405 ] 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "subsystem": "nbd", 00:12:10.405 "config": [] 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "subsystem": "scheduler", 00:12:10.405 "config": [ 00:12:10.405 { 00:12:10.405 "method": "framework_set_scheduler", 00:12:10.405 "params": { 00:12:10.405 "name": "static" 00:12:10.405 } 00:12:10.405 } 00:12:10.405 ] 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "subsystem": "nvmf", 00:12:10.405 "config": [ 00:12:10.405 { 00:12:10.405 "method": "nvmf_set_config", 00:12:10.405 "params": { 00:12:10.405 "discovery_filter": "match_any", 00:12:10.405 "admin_cmd_passthru": { 00:12:10.405 "identify_ctrlr": false 00:12:10.405 } 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_set_max_subsystems", 00:12:10.405 "params": { 00:12:10.405 "max_subsystems": 1024 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_set_crdt", 00:12:10.405 "params": { 00:12:10.405 "crdt1": 0, 00:12:10.405 "crdt2": 0, 00:12:10.405 "crdt3": 0 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_create_transport", 00:12:10.405 "params": { 00:12:10.405 "trtype": "TCP", 00:12:10.405 "max_queue_depth": 128, 00:12:10.405 "max_io_qpairs_per_ctrlr": 127, 00:12:10.405 "in_capsule_data_size": 4096, 00:12:10.405 "max_io_size": 131072, 00:12:10.405 "io_unit_size": 131072, 00:12:10.405 "max_aq_depth": 128, 00:12:10.405 "num_shared_buffers": 511, 00:12:10.405 "buf_cache_size": 4294967295, 00:12:10.405 "dif_insert_or_strip": false, 00:12:10.405 "zcopy": false, 00:12:10.405 "c2h_success": false, 00:12:10.405 "sock_priority": 0, 00:12:10.405 "abort_timeout_sec": 1, 00:12:10.405 "ack_timeout": 0, 00:12:10.405 "data_wr_pool_size": 0 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_create_subsystem", 00:12:10.405 "params": { 00:12:10.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.405 "allow_any_host": false, 00:12:10.405 "serial_number": "SPDK00000000000001", 00:12:10.405 "model_number": "SPDK bdev Controller", 00:12:10.405 "max_namespaces": 10, 00:12:10.405 "min_cntlid": 1, 00:12:10.405 "max_cntlid": 65519, 00:12:10.405 "ana_reporting": false 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_subsystem_add_host", 00:12:10.405 "params": { 00:12:10.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.405 "host": "nqn.2016-06.io.spdk:host1", 00:12:10.405 "psk": "/tmp/tmp.WyEpbZx3z7" 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_subsystem_add_ns", 00:12:10.405 "params": { 00:12:10.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.405 "namespace": { 00:12:10.405 "nsid": 1, 00:12:10.405 "bdev_name": "malloc0", 00:12:10.405 "nguid": "5C3F55569CAB42D98737892E8DEDC204", 00:12:10.405 "uuid": "5c3f5556-9cab-42d9-8737-892e8dedc204", 00:12:10.405 "no_auto_visible": false 00:12:10.405 } 00:12:10.405 } 00:12:10.405 }, 00:12:10.405 { 00:12:10.405 "method": "nvmf_subsystem_add_listener", 00:12:10.405 "params": { 00:12:10.405 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.405 "listen_address": { 00:12:10.405 "trtype": "TCP", 00:12:10.405 "adrfam": "IPv4", 00:12:10.405 "traddr": "10.0.0.2", 00:12:10.405 "trsvcid": "4420" 00:12:10.405 }, 00:12:10.405 "secure_channel": true 00:12:10.405 } 00:12:10.405 } 00:12:10.405 ] 00:12:10.405 } 
00:12:10.405 ] 00:12:10.405 }' 00:12:10.405 02:56:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:10.405 02:56:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:10.405 02:56:49 -- common/autotest_common.sh@10 -- # set +x 00:12:10.405 02:56:49 -- nvmf/common.sh@470 -- # nvmfpid=83782 00:12:10.405 02:56:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:10.405 02:56:49 -- nvmf/common.sh@471 -- # waitforlisten 83782 00:12:10.405 02:56:49 -- common/autotest_common.sh@817 -- # '[' -z 83782 ']' 00:12:10.405 02:56:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:10.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:10.405 02:56:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:10.405 02:56:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:10.405 02:56:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:10.405 02:56:49 -- common/autotest_common.sh@10 -- # set +x 00:12:10.405 [2024-04-23 02:56:49.393759] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:10.405 [2024-04-23 02:56:49.393852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:10.405 [2024-04-23 02:56:49.515808] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:10.405 [2024-04-23 02:56:49.533970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.665 [2024-04-23 02:56:49.565817] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.665 [2024-04-23 02:56:49.565870] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.665 [2024-04-23 02:56:49.565896] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:10.665 [2024-04-23 02:56:49.565903] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:10.665 [2024-04-23 02:56:49.565910] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
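
An aside on the mechanics just traced: the target never sees a config file on disk. nvmfappstart hands the JSON dumped above to nvmf_tgt on /dev/fd/62 via bash process substitution. A minimal sketch of the same pattern, with the config body abbreviated and the ip-netns wrapper omitted (waitforlisten is the harness helper from autotest_common.sh):

    # sketch: feed an in-memory JSON config to nvmf_tgt; <(...) expands to /dev/fd/NN
    config='{ "subsystems": [] }'    # abbreviated stand-in for the full dump above
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(echo "$config") &
    nvmfpid=$!
    waitforlisten "$nvmfpid"         # returns once the RPC socket accepts connections
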
00:12:10.665 [2024-04-23 02:56:49.565982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.665 [2024-04-23 02:56:49.742547] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.665 [2024-04-23 02:56:49.758468] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:10.665 [2024-04-23 02:56:49.774474] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:10.665 [2024-04-23 02:56:49.774674] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.232 02:56:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:11.232 02:56:50 -- common/autotest_common.sh@850 -- # return 0 00:12:11.232 02:56:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:11.232 02:56:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:11.232 02:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:11.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:11.232 02:56:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:11.232 02:56:50 -- target/tls.sh@207 -- # bdevperf_pid=83809 00:12:11.232 02:56:50 -- target/tls.sh@208 -- # waitforlisten 83809 /var/tmp/bdevperf.sock 00:12:11.232 02:56:50 -- common/autotest_common.sh@817 -- # '[' -z 83809 ']' 00:12:11.232 02:56:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:11.232 02:56:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:11.232 02:56:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
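
waitforlisten itself, traced above with rpc_addr=/var/tmp/bdevperf.sock and max_retries=100, is essentially a poll loop. A simplified reconstruction from the xtrace (the real helper in autotest_common.sh does more bookkeeping; this is only the shape of it):

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # app died before it could listen
            [ -S "$rpc_addr" ] && return 0           # RPC Unix socket is up
            sleep 0.1
        done
        return 1
    }
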
00:12:11.232 02:56:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:11.232 02:56:50 -- common/autotest_common.sh@10 -- # set +x 00:12:11.232 02:56:50 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:11.232 02:56:50 -- target/tls.sh@204 -- # echo '{ 00:12:11.232 "subsystems": [ 00:12:11.232 { 00:12:11.232 "subsystem": "keyring", 00:12:11.232 "config": [] 00:12:11.232 }, 00:12:11.232 { 00:12:11.232 "subsystem": "iobuf", 00:12:11.232 "config": [ 00:12:11.232 { 00:12:11.232 "method": "iobuf_set_options", 00:12:11.232 "params": { 00:12:11.232 "small_pool_count": 8192, 00:12:11.232 "large_pool_count": 1024, 00:12:11.232 "small_bufsize": 8192, 00:12:11.232 "large_bufsize": 135168 00:12:11.232 } 00:12:11.233 } 00:12:11.233 ] 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "subsystem": "sock", 00:12:11.233 "config": [ 00:12:11.233 { 00:12:11.233 "method": "sock_impl_set_options", 00:12:11.233 "params": { 00:12:11.233 "impl_name": "uring", 00:12:11.233 "recv_buf_size": 2097152, 00:12:11.233 "send_buf_size": 2097152, 00:12:11.233 "enable_recv_pipe": true, 00:12:11.233 "enable_quickack": false, 00:12:11.233 "enable_placement_id": 0, 00:12:11.233 "enable_zerocopy_send_server": false, 00:12:11.233 "enable_zerocopy_send_client": false, 00:12:11.233 "zerocopy_threshold": 0, 00:12:11.233 "tls_version": 0, 00:12:11.233 "enable_ktls": false 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "sock_impl_set_options", 00:12:11.233 "params": { 00:12:11.233 "impl_name": "posix", 00:12:11.233 "recv_buf_size": 2097152, 00:12:11.233 "send_buf_size": 2097152, 00:12:11.233 "enable_recv_pipe": true, 00:12:11.233 "enable_quickack": false, 00:12:11.233 "enable_placement_id": 0, 00:12:11.233 "enable_zerocopy_send_server": true, 00:12:11.233 "enable_zerocopy_send_client": false, 00:12:11.233 "zerocopy_threshold": 0, 00:12:11.233 "tls_version": 0, 00:12:11.233 "enable_ktls": false 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "sock_impl_set_options", 00:12:11.233 "params": { 00:12:11.233 "impl_name": "ssl", 00:12:11.233 "recv_buf_size": 4096, 00:12:11.233 "send_buf_size": 4096, 00:12:11.233 "enable_recv_pipe": true, 00:12:11.233 "enable_quickack": false, 00:12:11.233 "enable_placement_id": 0, 00:12:11.233 "enable_zerocopy_send_server": true, 00:12:11.233 "enable_zerocopy_send_client": false, 00:12:11.233 "zerocopy_threshold": 0, 00:12:11.233 "tls_version": 0, 00:12:11.233 "enable_ktls": false 00:12:11.233 } 00:12:11.233 } 00:12:11.233 ] 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "subsystem": "vmd", 00:12:11.233 "config": [] 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "subsystem": "accel", 00:12:11.233 "config": [ 00:12:11.233 { 00:12:11.233 "method": "accel_set_options", 00:12:11.233 "params": { 00:12:11.233 "small_cache_size": 128, 00:12:11.233 "large_cache_size": 16, 00:12:11.233 "task_count": 2048, 00:12:11.233 "sequence_count": 2048, 00:12:11.233 "buf_count": 2048 00:12:11.233 } 00:12:11.233 } 00:12:11.233 ] 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "subsystem": "bdev", 00:12:11.233 "config": [ 00:12:11.233 { 00:12:11.233 "method": "bdev_set_options", 00:12:11.233 "params": { 00:12:11.233 "bdev_io_pool_size": 65535, 00:12:11.233 "bdev_io_cache_size": 256, 00:12:11.233 "bdev_auto_examine": true, 00:12:11.233 "iobuf_small_cache_size": 128, 00:12:11.233 "iobuf_large_cache_size": 16 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": 
"bdev_raid_set_options", 00:12:11.233 "params": { 00:12:11.233 "process_window_size_kb": 1024 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "bdev_iscsi_set_options", 00:12:11.233 "params": { 00:12:11.233 "timeout_sec": 30 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "bdev_nvme_set_options", 00:12:11.233 "params": { 00:12:11.233 "action_on_timeout": "none", 00:12:11.233 "timeout_us": 0, 00:12:11.233 "timeout_admin_us": 0, 00:12:11.233 "keep_alive_timeout_ms": 10000, 00:12:11.233 "arbitration_burst": 0, 00:12:11.233 "low_priority_weight": 0, 00:12:11.233 "medium_priority_weight": 0, 00:12:11.233 "high_priority_weight": 0, 00:12:11.233 "nvme_adminq_poll_period_us": 10000, 00:12:11.233 "nvme_ioq_poll_period_us": 0, 00:12:11.233 "io_queue_requests": 512, 00:12:11.233 "delay_cmd_submit": true, 00:12:11.233 "transport_retry_count": 4, 00:12:11.233 "bdev_retry_count": 3, 00:12:11.233 "transport_ack_timeout": 0, 00:12:11.233 "ctrlr_loss_timeout_sec": 0, 00:12:11.233 "reconnect_delay_sec": 0, 00:12:11.233 "fast_io_fail_timeout_sec": 0, 00:12:11.233 "disable_auto_failback": false, 00:12:11.233 "generate_uuids": false, 00:12:11.233 "transport_tos": 0, 00:12:11.233 "nvme_error_stat": false, 00:12:11.233 "rdma_srq_size": 0, 00:12:11.233 "io_path_stat": false, 00:12:11.233 "allow_accel_sequence": false, 00:12:11.233 "rdma_max_cq_size": 0, 00:12:11.233 "rdma_cm_event_timeout_ms": 0, 00:12:11.233 "dhchap_digests": [ 00:12:11.233 "sha256", 00:12:11.233 "sha384", 00:12:11.233 "sha512" 00:12:11.233 ], 00:12:11.233 "dhchap_dhgroups": [ 00:12:11.233 "null", 00:12:11.233 "ffdhe2048", 00:12:11.233 "ffdhe3072", 00:12:11.233 "ffdhe4096", 00:12:11.233 "ffdhe6144", 00:12:11.233 "ffdhe8192" 00:12:11.233 ] 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "bdev_nvme_attach_controller", 00:12:11.233 "params": { 00:12:11.233 "name": "TLSTEST", 00:12:11.233 "trtype": "TCP", 00:12:11.233 "adrfam": "IPv4", 00:12:11.233 "traddr": "10.0.0.2", 00:12:11.233 "trsvcid": "4420", 00:12:11.233 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:11.233 "prchk_reftag": false, 00:12:11.233 "prchk_guard": false, 00:12:11.233 "ctrlr_loss_timeout_sec": 0, 00:12:11.233 "reconnect_delay_sec": 0, 00:12:11.233 "fast_io_fail_timeout_sec": 0, 00:12:11.233 "psk": "/tmp/tmp.WyEpbZx3z7", 00:12:11.233 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:11.233 "hdgst": false, 00:12:11.233 "ddgst": false 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "bdev_nvme_set_hotplug", 00:12:11.233 "params": { 00:12:11.233 "period_us": 100000, 00:12:11.233 "enable": false 00:12:11.233 } 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "method": "bdev_wait_for_examine" 00:12:11.233 } 00:12:11.233 ] 00:12:11.233 }, 00:12:11.233 { 00:12:11.233 "subsystem": "nbd", 00:12:11.233 "config": [] 00:12:11.233 } 00:12:11.233 ] 00:12:11.233 }' 00:12:11.233 [2024-04-23 02:56:50.362672] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:11.233 [2024-04-23 02:56:50.362771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83809 ] 00:12:11.492 [2024-04-23 02:56:50.484780] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:12:11.492 [2024-04-23 02:56:50.506302] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.492 [2024-04-23 02:56:50.547593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.750 [2024-04-23 02:56:50.676971] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:11.750 [2024-04-23 02:56:50.677845] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:12.317 02:56:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:12.317 02:56:51 -- common/autotest_common.sh@850 -- # return 0 00:12:12.317 02:56:51 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:12.317 Running I/O for 10 seconds... 00:12:22.296 00:12:22.296 Latency(us) 00:12:22.296 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.296 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:22.296 Verification LBA range: start 0x0 length 0x2000 00:12:22.296 TLSTESTn1 : 10.02 4385.21 17.13 0.00 0.00 29136.69 5898.24 26691.03 00:12:22.296 =================================================================================================================== 00:12:22.296 Total : 4385.21 17.13 0.00 0.00 29136.69 5898.24 26691.03 00:12:22.296 0 00:12:22.296 02:57:01 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:22.296 02:57:01 -- target/tls.sh@214 -- # killprocess 83809 00:12:22.296 02:57:01 -- common/autotest_common.sh@936 -- # '[' -z 83809 ']' 00:12:22.296 02:57:01 -- common/autotest_common.sh@940 -- # kill -0 83809 00:12:22.296 02:57:01 -- common/autotest_common.sh@941 -- # uname 00:12:22.296 02:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:22.296 02:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83809 00:12:22.555 killing process with pid 83809 00:12:22.555 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.555 00:12:22.555 Latency(us) 00:12:22.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.555 =================================================================================================================== 00:12:22.555 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.555 02:57:01 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:22.555 02:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:22.555 02:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83809' 00:12:22.555 02:57:01 -- common/autotest_common.sh@955 -- # kill 83809 00:12:22.555 [2024-04-23 02:57:01.456092] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:22.555 02:57:01 -- common/autotest_common.sh@960 -- # wait 83809 00:12:22.555 02:57:01 -- target/tls.sh@215 -- # killprocess 83782 00:12:22.555 02:57:01 -- common/autotest_common.sh@936 -- # '[' -z 83782 ']' 00:12:22.555 02:57:01 -- common/autotest_common.sh@940 -- # kill -0 83782 00:12:22.555 02:57:01 -- common/autotest_common.sh@941 -- # uname 00:12:22.555 02:57:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:22.555 02:57:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83782 00:12:22.555 killing process with pid 83782 00:12:22.555 02:57:01 -- 
common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:22.555 02:57:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:22.555 02:57:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83782' 00:12:22.555 02:57:01 -- common/autotest_common.sh@955 -- # kill 83782 00:12:22.555 [2024-04-23 02:57:01.620754] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:22.555 02:57:01 -- common/autotest_common.sh@960 -- # wait 83782 00:12:22.814 02:57:01 -- target/tls.sh@218 -- # nvmfappstart 00:12:22.814 02:57:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:22.814 02:57:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:22.814 02:57:01 -- common/autotest_common.sh@10 -- # set +x 00:12:22.814 02:57:01 -- nvmf/common.sh@470 -- # nvmfpid=83947 00:12:22.814 02:57:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:22.814 02:57:01 -- nvmf/common.sh@471 -- # waitforlisten 83947 00:12:22.814 02:57:01 -- common/autotest_common.sh@817 -- # '[' -z 83947 ']' 00:12:22.814 02:57:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.814 02:57:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:22.814 02:57:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.814 02:57:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:22.814 02:57:01 -- common/autotest_common.sh@10 -- # set +x 00:12:22.814 [2024-04-23 02:57:01.820979] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:22.814 [2024-04-23 02:57:01.821068] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.814 [2024-04-23 02:57:01.942985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:22.814 [2024-04-23 02:57:01.962741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.073 [2024-04-23 02:57:02.001117] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.073 [2024-04-23 02:57:02.001213] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.073 [2024-04-23 02:57:02.001230] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.073 [2024-04-23 02:57:02.001241] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.073 [2024-04-23 02:57:02.001259] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
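
This phase (tls.sh@218 onward) drops the pre-baked JSON config and builds the target live over RPC instead. The setup_nvmf_tgt sequence traced on the next lines, pulled out of the interleaved xtrace for readability (arguments exactly as logged, rpc.py path shortened; -k requests the secure channel on the listener, hence the "TLS support is considered experimental" notice):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7
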
00:12:23.073 [2024-04-23 02:57:02.001292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.073 02:57:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:23.073 02:57:02 -- common/autotest_common.sh@850 -- # return 0 00:12:23.073 02:57:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:23.073 02:57:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:23.073 02:57:02 -- common/autotest_common.sh@10 -- # set +x 00:12:23.073 02:57:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.073 02:57:02 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.WyEpbZx3z7 00:12:23.073 02:57:02 -- target/tls.sh@49 -- # local key=/tmp/tmp.WyEpbZx3z7 00:12:23.073 02:57:02 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:23.332 [2024-04-23 02:57:02.345490] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.332 02:57:02 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:23.591 02:57:02 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:23.850 [2024-04-23 02:57:02.861629] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:23.850 [2024-04-23 02:57:02.861839] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.850 02:57:02 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:24.109 malloc0 00:12:24.109 02:57:03 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:24.367 02:57:03 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.WyEpbZx3z7 00:12:24.627 [2024-04-23 02:57:03.543843] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:24.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:24.627 02:57:03 -- target/tls.sh@222 -- # bdevperf_pid=83990 00:12:24.627 02:57:03 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:24.627 02:57:03 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:24.627 02:57:03 -- target/tls.sh@225 -- # waitforlisten 83990 /var/tmp/bdevperf.sock 00:12:24.627 02:57:03 -- common/autotest_common.sh@817 -- # '[' -z 83990 ']' 00:12:24.627 02:57:03 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:24.627 02:57:03 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:24.627 02:57:03 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:24.627 02:57:03 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:24.627 02:57:03 -- common/autotest_common.sh@10 -- # set +x 00:12:24.627 [2024-04-23 02:57:03.613895] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:12:24.627 [2024-04-23 02:57:03.614422] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83990 ] 00:12:24.627 [2024-04-23 02:57:03.738484] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:24.627 [2024-04-23 02:57:03.756696] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.885 [2024-04-23 02:57:03.790736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.885 02:57:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:24.885 02:57:03 -- common/autotest_common.sh@850 -- # return 0 00:12:24.885 02:57:03 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WyEpbZx3z7 00:12:25.145 02:57:04 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:25.405 [2024-04-23 02:57:04.328156] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:25.405 nvme0n1 00:12:25.405 02:57:04 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:25.405 Running I/O for 1 seconds... 00:12:26.782 00:12:26.782 Latency(us) 00:12:26.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.782 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:26.782 Verification LBA range: start 0x0 length 0x2000 00:12:26.782 nvme0n1 : 1.03 4195.21 16.39 0.00 0.00 30087.96 6255.71 19184.17 00:12:26.782 =================================================================================================================== 00:12:26.782 Total : 4195.21 16.39 0.00 0.00 30087.96 6255.71 19184.17 00:12:26.782 0 00:12:26.782 02:57:05 -- target/tls.sh@234 -- # killprocess 83990 00:12:26.782 02:57:05 -- common/autotest_common.sh@936 -- # '[' -z 83990 ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@940 -- # kill -0 83990 00:12:26.782 02:57:05 -- common/autotest_common.sh@941 -- # uname 00:12:26.782 02:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83990 00:12:26.782 killing process with pid 83990 00:12:26.782 Received shutdown signal, test time was about 1.000000 seconds 00:12:26.782 00:12:26.782 Latency(us) 00:12:26.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:26.782 =================================================================================================================== 00:12:26.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:26.782 02:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:26.782 02:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83990' 00:12:26.782 02:57:05 -- common/autotest_common.sh@955 -- # kill 83990 00:12:26.782 02:57:05 -- common/autotest_common.sh@960 -- # wait 83990 00:12:26.782 02:57:05 -- target/tls.sh@235 -- # killprocess 83947 00:12:26.782 02:57:05 -- 
common/autotest_common.sh@936 -- # '[' -z 83947 ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@940 -- # kill -0 83947 00:12:26.782 02:57:05 -- common/autotest_common.sh@941 -- # uname 00:12:26.782 02:57:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83947 00:12:26.782 killing process with pid 83947 00:12:26.782 02:57:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:26.782 02:57:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83947' 00:12:26.782 02:57:05 -- common/autotest_common.sh@955 -- # kill 83947 00:12:26.782 [2024-04-23 02:57:05.754949] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:26.782 02:57:05 -- common/autotest_common.sh@960 -- # wait 83947 00:12:26.782 02:57:05 -- target/tls.sh@238 -- # nvmfappstart 00:12:26.782 02:57:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:26.782 02:57:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:26.782 02:57:05 -- common/autotest_common.sh@10 -- # set +x 00:12:26.782 02:57:05 -- nvmf/common.sh@470 -- # nvmfpid=84032 00:12:26.782 02:57:05 -- nvmf/common.sh@471 -- # waitforlisten 84032 00:12:26.782 02:57:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:26.782 02:57:05 -- common/autotest_common.sh@817 -- # '[' -z 84032 ']' 00:12:26.782 02:57:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.782 02:57:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:26.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.782 02:57:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.782 02:57:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:26.782 02:57:05 -- common/autotest_common.sh@10 -- # set +x 00:12:27.041 [2024-04-23 02:57:05.964078] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:27.041 [2024-04-23 02:57:05.964192] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.041 [2024-04-23 02:57:06.086670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:27.041 [2024-04-23 02:57:06.103675] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.041 [2024-04-23 02:57:06.133780] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.041 [2024-04-23 02:57:06.133836] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.041 [2024-04-23 02:57:06.133862] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.041 [2024-04-23 02:57:06.133869] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.041 [2024-04-23 02:57:06.133876] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
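
killprocess, traced at every teardown above, reduces to a guarded kill-and-reap. A simplified reconstruction from the xtrace (the real helper in autotest_common.sh handles the sudo and non-Linux branches differently; this keeps only the path exercised in this log):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                      # bail if it already exited
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # simplified: never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                     # reap and propagate the exit status
    }
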
00:12:27.041 [2024-04-23 02:57:06.133916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.300 02:57:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:27.300 02:57:06 -- common/autotest_common.sh@850 -- # return 0 00:12:27.300 02:57:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:27.300 02:57:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:27.300 02:57:06 -- common/autotest_common.sh@10 -- # set +x 00:12:27.300 02:57:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.300 02:57:06 -- target/tls.sh@239 -- # rpc_cmd 00:12:27.300 02:57:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:27.300 02:57:06 -- common/autotest_common.sh@10 -- # set +x 00:12:27.300 [2024-04-23 02:57:06.246398] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.300 malloc0 00:12:27.300 [2024-04-23 02:57:06.272597] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:27.300 [2024-04-23 02:57:06.272783] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.300 02:57:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:27.300 02:57:06 -- target/tls.sh@252 -- # bdevperf_pid=84051 00:12:27.300 02:57:06 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:12:27.300 02:57:06 -- target/tls.sh@254 -- # waitforlisten 84051 /var/tmp/bdevperf.sock 00:12:27.300 02:57:06 -- common/autotest_common.sh@817 -- # '[' -z 84051 ']' 00:12:27.300 02:57:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:27.300 02:57:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:27.300 02:57:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:27.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:27.300 02:57:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:27.300 02:57:06 -- common/autotest_common.sh@10 -- # set +x 00:12:27.300 [2024-04-23 02:57:06.357528] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:27.301 [2024-04-23 02:57:06.357841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84051 ] 00:12:27.559 [2024-04-23 02:57:06.482643] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
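
On the client side this phase again goes through the keyring rather than the deprecated file-path PSK: the key file is registered once under a name, and the controller attach references that name, as the rpc.py calls traced just below show (commands as logged, rpc.py path shortened):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WyEpbZx3z7
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
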
00:12:27.559 [2024-04-23 02:57:06.494953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.559 [2024-04-23 02:57:06.527446] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.495 02:57:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:28.495 02:57:07 -- common/autotest_common.sh@850 -- # return 0 00:12:28.495 02:57:07 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.WyEpbZx3z7 00:12:28.495 02:57:07 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:12:28.754 [2024-04-23 02:57:07.729544] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:28.754 nvme0n1 00:12:28.754 02:57:07 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:28.754 Running I/O for 1 seconds... 00:12:30.132 00:12:30.133 Latency(us) 00:12:30.133 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.133 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:30.133 Verification LBA range: start 0x0 length 0x2000 00:12:30.133 nvme0n1 : 1.02 4327.72 16.91 0.00 0.00 29200.33 6911.07 23354.65 00:12:30.133 =================================================================================================================== 00:12:30.133 Total : 4327.72 16.91 0.00 0.00 29200.33 6911.07 23354.65 00:12:30.133 0 00:12:30.133 02:57:08 -- target/tls.sh@263 -- # rpc_cmd save_config 00:12:30.133 02:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:30.133 02:57:08 -- common/autotest_common.sh@10 -- # set +x 00:12:30.133 02:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:30.133 02:57:09 -- target/tls.sh@263 -- # tgtcfg='{ 00:12:30.133 "subsystems": [ 00:12:30.133 { 00:12:30.133 "subsystem": "keyring", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "keyring_file_add_key", 00:12:30.133 "params": { 00:12:30.133 "name": "key0", 00:12:30.133 "path": "/tmp/tmp.WyEpbZx3z7" 00:12:30.133 } 00:12:30.133 } 00:12:30.133 ] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "iobuf", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "iobuf_set_options", 00:12:30.133 "params": { 00:12:30.133 "small_pool_count": 8192, 00:12:30.133 "large_pool_count": 1024, 00:12:30.133 "small_bufsize": 8192, 00:12:30.133 "large_bufsize": 135168 00:12:30.133 } 00:12:30.133 } 00:12:30.133 ] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "sock", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "sock_impl_set_options", 00:12:30.133 "params": { 00:12:30.133 "impl_name": "uring", 00:12:30.133 "recv_buf_size": 2097152, 00:12:30.133 "send_buf_size": 2097152, 00:12:30.133 "enable_recv_pipe": true, 00:12:30.133 "enable_quickack": false, 00:12:30.133 "enable_placement_id": 0, 00:12:30.133 "enable_zerocopy_send_server": false, 00:12:30.133 "enable_zerocopy_send_client": false, 00:12:30.133 "zerocopy_threshold": 0, 00:12:30.133 "tls_version": 0, 00:12:30.133 "enable_ktls": false 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "sock_impl_set_options", 00:12:30.133 "params": { 00:12:30.133 "impl_name": "posix", 00:12:30.133 "recv_buf_size": 2097152, 00:12:30.133 
"send_buf_size": 2097152, 00:12:30.133 "enable_recv_pipe": true, 00:12:30.133 "enable_quickack": false, 00:12:30.133 "enable_placement_id": 0, 00:12:30.133 "enable_zerocopy_send_server": true, 00:12:30.133 "enable_zerocopy_send_client": false, 00:12:30.133 "zerocopy_threshold": 0, 00:12:30.133 "tls_version": 0, 00:12:30.133 "enable_ktls": false 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "sock_impl_set_options", 00:12:30.133 "params": { 00:12:30.133 "impl_name": "ssl", 00:12:30.133 "recv_buf_size": 4096, 00:12:30.133 "send_buf_size": 4096, 00:12:30.133 "enable_recv_pipe": true, 00:12:30.133 "enable_quickack": false, 00:12:30.133 "enable_placement_id": 0, 00:12:30.133 "enable_zerocopy_send_server": true, 00:12:30.133 "enable_zerocopy_send_client": false, 00:12:30.133 "zerocopy_threshold": 0, 00:12:30.133 "tls_version": 0, 00:12:30.133 "enable_ktls": false 00:12:30.133 } 00:12:30.133 } 00:12:30.133 ] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "vmd", 00:12:30.133 "config": [] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "accel", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "accel_set_options", 00:12:30.133 "params": { 00:12:30.133 "small_cache_size": 128, 00:12:30.133 "large_cache_size": 16, 00:12:30.133 "task_count": 2048, 00:12:30.133 "sequence_count": 2048, 00:12:30.133 "buf_count": 2048 00:12:30.133 } 00:12:30.133 } 00:12:30.133 ] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "bdev", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "bdev_set_options", 00:12:30.133 "params": { 00:12:30.133 "bdev_io_pool_size": 65535, 00:12:30.133 "bdev_io_cache_size": 256, 00:12:30.133 "bdev_auto_examine": true, 00:12:30.133 "iobuf_small_cache_size": 128, 00:12:30.133 "iobuf_large_cache_size": 16 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "bdev_raid_set_options", 00:12:30.133 "params": { 00:12:30.133 "process_window_size_kb": 1024 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "bdev_iscsi_set_options", 00:12:30.133 "params": { 00:12:30.133 "timeout_sec": 30 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "bdev_nvme_set_options", 00:12:30.133 "params": { 00:12:30.133 "action_on_timeout": "none", 00:12:30.133 "timeout_us": 0, 00:12:30.133 "timeout_admin_us": 0, 00:12:30.133 "keep_alive_timeout_ms": 10000, 00:12:30.133 "arbitration_burst": 0, 00:12:30.133 "low_priority_weight": 0, 00:12:30.133 "medium_priority_weight": 0, 00:12:30.133 "high_priority_weight": 0, 00:12:30.133 "nvme_adminq_poll_period_us": 10000, 00:12:30.133 "nvme_ioq_poll_period_us": 0, 00:12:30.133 "io_queue_requests": 0, 00:12:30.133 "delay_cmd_submit": true, 00:12:30.133 "transport_retry_count": 4, 00:12:30.133 "bdev_retry_count": 3, 00:12:30.133 "transport_ack_timeout": 0, 00:12:30.133 "ctrlr_loss_timeout_sec": 0, 00:12:30.133 "reconnect_delay_sec": 0, 00:12:30.133 "fast_io_fail_timeout_sec": 0, 00:12:30.133 "disable_auto_failback": false, 00:12:30.133 "generate_uuids": false, 00:12:30.133 "transport_tos": 0, 00:12:30.133 "nvme_error_stat": false, 00:12:30.133 "rdma_srq_size": 0, 00:12:30.133 "io_path_stat": false, 00:12:30.133 "allow_accel_sequence": false, 00:12:30.133 "rdma_max_cq_size": 0, 00:12:30.133 "rdma_cm_event_timeout_ms": 0, 00:12:30.133 "dhchap_digests": [ 00:12:30.133 "sha256", 00:12:30.133 "sha384", 00:12:30.133 "sha512" 00:12:30.133 ], 00:12:30.133 "dhchap_dhgroups": [ 00:12:30.133 "null", 00:12:30.133 "ffdhe2048", 00:12:30.133 "ffdhe3072", 00:12:30.133 
"ffdhe4096", 00:12:30.133 "ffdhe6144", 00:12:30.133 "ffdhe8192" 00:12:30.133 ] 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "bdev_nvme_set_hotplug", 00:12:30.133 "params": { 00:12:30.133 "period_us": 100000, 00:12:30.133 "enable": false 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "bdev_malloc_create", 00:12:30.133 "params": { 00:12:30.133 "name": "malloc0", 00:12:30.133 "num_blocks": 8192, 00:12:30.133 "block_size": 4096, 00:12:30.133 "physical_block_size": 4096, 00:12:30.133 "uuid": "a2a8f634-ef91-45f4-9ae2-40b65cb4b730", 00:12:30.133 "optimal_io_boundary": 0 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "bdev_wait_for_examine" 00:12:30.133 } 00:12:30.133 ] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "nbd", 00:12:30.133 "config": [] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "scheduler", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "framework_set_scheduler", 00:12:30.133 "params": { 00:12:30.133 "name": "static" 00:12:30.133 } 00:12:30.133 } 00:12:30.133 ] 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "subsystem": "nvmf", 00:12:30.133 "config": [ 00:12:30.133 { 00:12:30.133 "method": "nvmf_set_config", 00:12:30.133 "params": { 00:12:30.133 "discovery_filter": "match_any", 00:12:30.133 "admin_cmd_passthru": { 00:12:30.133 "identify_ctrlr": false 00:12:30.133 } 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "nvmf_set_max_subsystems", 00:12:30.133 "params": { 00:12:30.133 "max_subsystems": 1024 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "nvmf_set_crdt", 00:12:30.133 "params": { 00:12:30.133 "crdt1": 0, 00:12:30.133 "crdt2": 0, 00:12:30.133 "crdt3": 0 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "nvmf_create_transport", 00:12:30.133 "params": { 00:12:30.133 "trtype": "TCP", 00:12:30.133 "max_queue_depth": 128, 00:12:30.133 "max_io_qpairs_per_ctrlr": 127, 00:12:30.133 "in_capsule_data_size": 4096, 00:12:30.133 "max_io_size": 131072, 00:12:30.133 "io_unit_size": 131072, 00:12:30.133 "max_aq_depth": 128, 00:12:30.133 "num_shared_buffers": 511, 00:12:30.133 "buf_cache_size": 4294967295, 00:12:30.133 "dif_insert_or_strip": false, 00:12:30.133 "zcopy": false, 00:12:30.133 "c2h_success": false, 00:12:30.133 "sock_priority": 0, 00:12:30.133 "abort_timeout_sec": 1, 00:12:30.133 "ack_timeout": 0, 00:12:30.133 "data_wr_pool_size": 0 00:12:30.133 } 00:12:30.133 }, 00:12:30.133 { 00:12:30.133 "method": "nvmf_create_subsystem", 00:12:30.133 "params": { 00:12:30.133 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.134 "allow_any_host": false, 00:12:30.134 "serial_number": "00000000000000000000", 00:12:30.134 "model_number": "SPDK bdev Controller", 00:12:30.134 "max_namespaces": 32, 00:12:30.134 "min_cntlid": 1, 00:12:30.134 "max_cntlid": 65519, 00:12:30.134 "ana_reporting": false 00:12:30.134 } 00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "method": "nvmf_subsystem_add_host", 00:12:30.134 "params": { 00:12:30.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.134 "host": "nqn.2016-06.io.spdk:host1", 00:12:30.134 "psk": "key0" 00:12:30.134 } 00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "method": "nvmf_subsystem_add_ns", 00:12:30.134 "params": { 00:12:30.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.134 "namespace": { 00:12:30.134 "nsid": 1, 00:12:30.134 "bdev_name": "malloc0", 00:12:30.134 "nguid": "A2A8F634EF9145F49AE240B65CB4B730", 00:12:30.134 "uuid": "a2a8f634-ef91-45f4-9ae2-40b65cb4b730", 00:12:30.134 "no_auto_visible": 
false 00:12:30.134 } 00:12:30.134 } 00:12:30.134 }, 00:12:30.134 { 00:12:30.134 "method": "nvmf_subsystem_add_listener", 00:12:30.134 "params": { 00:12:30.134 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.134 "listen_address": { 00:12:30.134 "trtype": "TCP", 00:12:30.134 "adrfam": "IPv4", 00:12:30.134 "traddr": "10.0.0.2", 00:12:30.134 "trsvcid": "4420" 00:12:30.134 }, 00:12:30.134 "secure_channel": true 00:12:30.134 } 00:12:30.134 } 00:12:30.134 ] 00:12:30.134 } 00:12:30.134 ] 00:12:30.134 }' 00:12:30.134 02:57:09 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:30.395 02:57:09 -- target/tls.sh@264 -- # bperfcfg='{ 00:12:30.395 "subsystems": [ 00:12:30.395 { 00:12:30.395 "subsystem": "keyring", 00:12:30.395 "config": [ 00:12:30.395 { 00:12:30.395 "method": "keyring_file_add_key", 00:12:30.395 "params": { 00:12:30.395 "name": "key0", 00:12:30.395 "path": "/tmp/tmp.WyEpbZx3z7" 00:12:30.395 } 00:12:30.395 } 00:12:30.395 ] 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "subsystem": "iobuf", 00:12:30.395 "config": [ 00:12:30.395 { 00:12:30.395 "method": "iobuf_set_options", 00:12:30.395 "params": { 00:12:30.395 "small_pool_count": 8192, 00:12:30.395 "large_pool_count": 1024, 00:12:30.395 "small_bufsize": 8192, 00:12:30.395 "large_bufsize": 135168 00:12:30.395 } 00:12:30.395 } 00:12:30.395 ] 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "subsystem": "sock", 00:12:30.395 "config": [ 00:12:30.395 { 00:12:30.395 "method": "sock_impl_set_options", 00:12:30.395 "params": { 00:12:30.395 "impl_name": "uring", 00:12:30.395 "recv_buf_size": 2097152, 00:12:30.395 "send_buf_size": 2097152, 00:12:30.395 "enable_recv_pipe": true, 00:12:30.395 "enable_quickack": false, 00:12:30.395 "enable_placement_id": 0, 00:12:30.395 "enable_zerocopy_send_server": false, 00:12:30.395 "enable_zerocopy_send_client": false, 00:12:30.395 "zerocopy_threshold": 0, 00:12:30.395 "tls_version": 0, 00:12:30.395 "enable_ktls": false 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "sock_impl_set_options", 00:12:30.395 "params": { 00:12:30.395 "impl_name": "posix", 00:12:30.395 "recv_buf_size": 2097152, 00:12:30.395 "send_buf_size": 2097152, 00:12:30.395 "enable_recv_pipe": true, 00:12:30.395 "enable_quickack": false, 00:12:30.395 "enable_placement_id": 0, 00:12:30.395 "enable_zerocopy_send_server": true, 00:12:30.395 "enable_zerocopy_send_client": false, 00:12:30.395 "zerocopy_threshold": 0, 00:12:30.395 "tls_version": 0, 00:12:30.395 "enable_ktls": false 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "sock_impl_set_options", 00:12:30.395 "params": { 00:12:30.395 "impl_name": "ssl", 00:12:30.395 "recv_buf_size": 4096, 00:12:30.395 "send_buf_size": 4096, 00:12:30.395 "enable_recv_pipe": true, 00:12:30.395 "enable_quickack": false, 00:12:30.395 "enable_placement_id": 0, 00:12:30.395 "enable_zerocopy_send_server": true, 00:12:30.395 "enable_zerocopy_send_client": false, 00:12:30.395 "zerocopy_threshold": 0, 00:12:30.395 "tls_version": 0, 00:12:30.395 "enable_ktls": false 00:12:30.395 } 00:12:30.395 } 00:12:30.395 ] 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "subsystem": "vmd", 00:12:30.395 "config": [] 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "subsystem": "accel", 00:12:30.395 "config": [ 00:12:30.395 { 00:12:30.395 "method": "accel_set_options", 00:12:30.395 "params": { 00:12:30.395 "small_cache_size": 128, 00:12:30.395 "large_cache_size": 16, 00:12:30.395 "task_count": 2048, 00:12:30.395 "sequence_count": 2048, 00:12:30.395 
"buf_count": 2048 00:12:30.395 } 00:12:30.395 } 00:12:30.395 ] 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "subsystem": "bdev", 00:12:30.395 "config": [ 00:12:30.395 { 00:12:30.395 "method": "bdev_set_options", 00:12:30.395 "params": { 00:12:30.395 "bdev_io_pool_size": 65535, 00:12:30.395 "bdev_io_cache_size": 256, 00:12:30.395 "bdev_auto_examine": true, 00:12:30.395 "iobuf_small_cache_size": 128, 00:12:30.395 "iobuf_large_cache_size": 16 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_raid_set_options", 00:12:30.395 "params": { 00:12:30.395 "process_window_size_kb": 1024 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_iscsi_set_options", 00:12:30.395 "params": { 00:12:30.395 "timeout_sec": 30 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_nvme_set_options", 00:12:30.395 "params": { 00:12:30.395 "action_on_timeout": "none", 00:12:30.395 "timeout_us": 0, 00:12:30.395 "timeout_admin_us": 0, 00:12:30.395 "keep_alive_timeout_ms": 10000, 00:12:30.395 "arbitration_burst": 0, 00:12:30.395 "low_priority_weight": 0, 00:12:30.395 "medium_priority_weight": 0, 00:12:30.395 "high_priority_weight": 0, 00:12:30.395 "nvme_adminq_poll_period_us": 10000, 00:12:30.395 "nvme_ioq_poll_period_us": 0, 00:12:30.395 "io_queue_requests": 512, 00:12:30.395 "delay_cmd_submit": true, 00:12:30.395 "transport_retry_count": 4, 00:12:30.395 "bdev_retry_count": 3, 00:12:30.395 "transport_ack_timeout": 0, 00:12:30.395 "ctrlr_loss_timeout_sec": 0, 00:12:30.395 "reconnect_delay_sec": 0, 00:12:30.395 "fast_io_fail_timeout_sec": 0, 00:12:30.395 "disable_auto_failback": false, 00:12:30.395 "generate_uuids": false, 00:12:30.395 "transport_tos": 0, 00:12:30.395 "nvme_error_stat": false, 00:12:30.395 "rdma_srq_size": 0, 00:12:30.395 "io_path_stat": false, 00:12:30.395 "allow_accel_sequence": false, 00:12:30.395 "rdma_max_cq_size": 0, 00:12:30.395 "rdma_cm_event_timeout_ms": 0, 00:12:30.395 "dhchap_digests": [ 00:12:30.395 "sha256", 00:12:30.395 "sha384", 00:12:30.395 "sha512" 00:12:30.395 ], 00:12:30.395 "dhchap_dhgroups": [ 00:12:30.395 "null", 00:12:30.395 "ffdhe2048", 00:12:30.395 "ffdhe3072", 00:12:30.395 "ffdhe4096", 00:12:30.395 "ffdhe6144", 00:12:30.395 "ffdhe8192" 00:12:30.395 ] 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_nvme_attach_controller", 00:12:30.395 "params": { 00:12:30.395 "name": "nvme0", 00:12:30.395 "trtype": "TCP", 00:12:30.395 "adrfam": "IPv4", 00:12:30.395 "traddr": "10.0.0.2", 00:12:30.395 "trsvcid": "4420", 00:12:30.395 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.395 "prchk_reftag": false, 00:12:30.395 "prchk_guard": false, 00:12:30.395 "ctrlr_loss_timeout_sec": 0, 00:12:30.395 "reconnect_delay_sec": 0, 00:12:30.395 "fast_io_fail_timeout_sec": 0, 00:12:30.395 "psk": "key0", 00:12:30.395 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:30.395 "hdgst": false, 00:12:30.395 "ddgst": false 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_nvme_set_hotplug", 00:12:30.395 "params": { 00:12:30.395 "period_us": 100000, 00:12:30.395 "enable": false 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_enable_histogram", 00:12:30.395 "params": { 00:12:30.395 "name": "nvme0n1", 00:12:30.395 "enable": true 00:12:30.395 } 00:12:30.395 }, 00:12:30.395 { 00:12:30.395 "method": "bdev_wait_for_examine" 00:12:30.395 } 00:12:30.396 ] 00:12:30.396 }, 00:12:30.396 { 00:12:30.396 "subsystem": "nbd", 00:12:30.396 "config": [] 00:12:30.396 } 00:12:30.396 ] 00:12:30.396 }' 
00:12:30.396 02:57:09 -- target/tls.sh@266 -- # killprocess 84051 00:12:30.396 02:57:09 -- common/autotest_common.sh@936 -- # '[' -z 84051 ']' 00:12:30.396 02:57:09 -- common/autotest_common.sh@940 -- # kill -0 84051 00:12:30.396 02:57:09 -- common/autotest_common.sh@941 -- # uname 00:12:30.396 02:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.396 02:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84051 00:12:30.396 killing process with pid 84051 00:12:30.396 Received shutdown signal, test time was about 1.000000 seconds 00:12:30.396 00:12:30.396 Latency(us) 00:12:30.396 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:30.396 =================================================================================================================== 00:12:30.396 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:30.396 02:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:30.396 02:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:30.396 02:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84051' 00:12:30.396 02:57:09 -- common/autotest_common.sh@955 -- # kill 84051 00:12:30.396 02:57:09 -- common/autotest_common.sh@960 -- # wait 84051 00:12:30.661 02:57:09 -- target/tls.sh@267 -- # killprocess 84032 00:12:30.661 02:57:09 -- common/autotest_common.sh@936 -- # '[' -z 84032 ']' 00:12:30.661 02:57:09 -- common/autotest_common.sh@940 -- # kill -0 84032 00:12:30.661 02:57:09 -- common/autotest_common.sh@941 -- # uname 00:12:30.661 02:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:30.661 02:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84032 00:12:30.661 killing process with pid 84032 00:12:30.661 02:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:30.661 02:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:30.661 02:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84032' 00:12:30.661 02:57:09 -- common/autotest_common.sh@955 -- # kill 84032 00:12:30.661 02:57:09 -- common/autotest_common.sh@960 -- # wait 84032 00:12:30.661 02:57:09 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:12:30.661 02:57:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:30.661 02:57:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:30.661 02:57:09 -- target/tls.sh@269 -- # echo '{ 00:12:30.661 "subsystems": [ 00:12:30.661 { 00:12:30.661 "subsystem": "keyring", 00:12:30.661 "config": [ 00:12:30.661 { 00:12:30.662 "method": "keyring_file_add_key", 00:12:30.662 "params": { 00:12:30.662 "name": "key0", 00:12:30.662 "path": "/tmp/tmp.WyEpbZx3z7" 00:12:30.662 } 00:12:30.662 } 00:12:30.662 ] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "iobuf", 00:12:30.662 "config": [ 00:12:30.662 { 00:12:30.662 "method": "iobuf_set_options", 00:12:30.662 "params": { 00:12:30.662 "small_pool_count": 8192, 00:12:30.662 "large_pool_count": 1024, 00:12:30.662 "small_bufsize": 8192, 00:12:30.662 "large_bufsize": 135168 00:12:30.662 } 00:12:30.662 } 00:12:30.662 ] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "sock", 00:12:30.662 "config": [ 00:12:30.662 { 00:12:30.662 "method": "sock_impl_set_options", 00:12:30.662 "params": { 00:12:30.662 "impl_name": "uring", 00:12:30.662 "recv_buf_size": 2097152, 00:12:30.662 "send_buf_size": 2097152, 00:12:30.662 "enable_recv_pipe": true, 00:12:30.662 "enable_quickack": false, 00:12:30.662 
"enable_placement_id": 0, 00:12:30.662 "enable_zerocopy_send_server": false, 00:12:30.662 "enable_zerocopy_send_client": false, 00:12:30.662 "zerocopy_threshold": 0, 00:12:30.662 "tls_version": 0, 00:12:30.662 "enable_ktls": false 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "sock_impl_set_options", 00:12:30.662 "params": { 00:12:30.662 "impl_name": "posix", 00:12:30.662 "recv_buf_size": 2097152, 00:12:30.662 "send_buf_size": 2097152, 00:12:30.662 "enable_recv_pipe": true, 00:12:30.662 "enable_quickack": false, 00:12:30.662 "enable_placement_id": 0, 00:12:30.662 "enable_zerocopy_send_server": true, 00:12:30.662 "enable_zerocopy_send_client": false, 00:12:30.662 "zerocopy_threshold": 0, 00:12:30.662 "tls_version": 0, 00:12:30.662 "enable_ktls": false 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "sock_impl_set_options", 00:12:30.662 "params": { 00:12:30.662 "impl_name": "ssl", 00:12:30.662 "recv_buf_size": 4096, 00:12:30.662 "send_buf_size": 4096, 00:12:30.662 "enable_recv_pipe": true, 00:12:30.662 "enable_quickack": false, 00:12:30.662 "enable_placement_id": 0, 00:12:30.662 "enable_zerocopy_send_server": true, 00:12:30.662 "enable_zerocopy_send_client": false, 00:12:30.662 "zerocopy_threshold": 0, 00:12:30.662 "tls_version": 0, 00:12:30.662 "enable_ktls": false 00:12:30.662 } 00:12:30.662 } 00:12:30.662 ] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "vmd", 00:12:30.662 "config": [] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "accel", 00:12:30.662 "config": [ 00:12:30.662 { 00:12:30.662 "method": "accel_set_options", 00:12:30.662 "params": { 00:12:30.662 "small_cache_size": 128, 00:12:30.662 "large_cache_size": 16, 00:12:30.662 "task_count": 2048, 00:12:30.662 "sequence_count": 2048, 00:12:30.662 "buf_count": 2048 00:12:30.662 } 00:12:30.662 } 00:12:30.662 ] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "bdev", 00:12:30.662 "config": [ 00:12:30.662 { 00:12:30.662 "method": "bdev_set_options", 00:12:30.662 "params": { 00:12:30.662 "bdev_io_pool_size": 65535, 00:12:30.662 "bdev_io_cache_size": 256, 00:12:30.662 "bdev_auto_examine": true, 00:12:30.662 "iobuf_small_cache_size": 128, 00:12:30.662 "iobuf_large_cache_size": 16 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "bdev_raid_set_options", 00:12:30.662 "params": { 00:12:30.662 "process_window_size_kb": 1024 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "bdev_iscsi_set_options", 00:12:30.662 "params": { 00:12:30.662 "timeout_sec": 30 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "bdev_nvme_set_options", 00:12:30.662 "params": { 00:12:30.662 "action_on_timeout": "none", 00:12:30.662 "timeout_us": 0, 00:12:30.662 "timeout_admin_us": 0, 00:12:30.662 "keep_alive_timeout_ms": 10000, 00:12:30.662 "arbitration_burst": 0, 00:12:30.662 "low_priority_weight": 0, 00:12:30.662 "medium_priority_weight": 0, 00:12:30.662 "high_priority_weight": 0, 00:12:30.662 "nvme_adminq_poll_period_us": 10000, 00:12:30.662 "nvme_ioq_poll_period_us": 0, 00:12:30.662 "io_queue_requests": 0, 00:12:30.662 "delay_cmd_submit": true, 00:12:30.662 "transport_retry_count": 4, 00:12:30.662 "bdev_retry_count": 3, 00:12:30.662 "transport_ack_timeout": 0, 00:12:30.662 "ctrlr_loss_timeout_sec": 0, 00:12:30.662 "reconnect_delay_sec": 0, 00:12:30.662 "fast_io_fail_timeout_sec": 0, 00:12:30.662 "disable_auto_failback": false, 00:12:30.662 "generate_uuids": false, 00:12:30.662 "transport_tos": 0, 00:12:30.662 "nvme_error_stat": false, 
00:12:30.662 "rdma_srq_size": 0, 00:12:30.662 "io_path_stat": false, 00:12:30.662 "allow_accel_sequence": false, 00:12:30.662 "rdma_max_cq_size": 0, 00:12:30.662 "rdma_cm_event_timeout_ms": 0, 00:12:30.662 "dhchap_digests": [ 00:12:30.662 "sha256", 00:12:30.662 "sha384", 00:12:30.662 "sha512" 00:12:30.662 ], 00:12:30.662 "dhchap_dhgroups": [ 00:12:30.662 "null", 00:12:30.662 "ffdhe2048", 00:12:30.662 "ffdhe3072", 00:12:30.662 "ffdhe4096", 00:12:30.662 "ffdhe6144", 00:12:30.662 "ffdhe8192" 00:12:30.662 ] 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "bdev_nvme_set_hotplug", 00:12:30.662 "params": { 00:12:30.662 "period_us": 100000, 00:12:30.662 "enable": false 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "bdev_malloc_create", 00:12:30.662 "params": { 00:12:30.662 "name": "malloc0", 00:12:30.662 "num_blocks": 8192, 00:12:30.662 "block_size": 4096, 00:12:30.662 "physical_block_size": 4096, 00:12:30.662 "uuid": "a2a8f634-ef91-45f4-9ae2-40b65cb4b730", 00:12:30.662 "optimal_io_boundary": 0 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "bdev_wait_for_examine" 00:12:30.662 } 00:12:30.662 ] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "nbd", 00:12:30.662 "config": [] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "scheduler", 00:12:30.662 "config": [ 00:12:30.662 { 00:12:30.662 "method": "framework_set_scheduler", 00:12:30.662 "params": { 00:12:30.662 "name": "static" 00:12:30.662 } 00:12:30.662 } 00:12:30.662 ] 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "subsystem": "nvmf", 00:12:30.662 "config": [ 00:12:30.662 { 00:12:30.662 "method": "nvmf_set_config", 00:12:30.662 "params": { 00:12:30.662 "discovery_filter": "match_any", 00:12:30.662 "admin_cmd_passthru": { 00:12:30.662 "identify_ctrlr": false 00:12:30.662 } 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_set_max_subsystems", 00:12:30.662 "params": { 00:12:30.662 "max_subsystems": 1024 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_set_crdt", 00:12:30.662 "params": { 00:12:30.662 "crdt1": 0, 00:12:30.662 "crdt2": 0, 00:12:30.662 "crdt3": 0 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_create_transport", 00:12:30.662 "params": { 00:12:30.662 "trtype": "TCP", 00:12:30.662 "max_queue_depth": 128, 00:12:30.662 "max_io_qpairs_per_ctrlr": 127, 00:12:30.662 "in_capsule_data_size": 4096, 00:12:30.662 "max_io_size": 131072, 00:12:30.662 "io_unit_size": 131072, 00:12:30.662 "max_aq_depth": 128, 00:12:30.662 "num_shared_buffers": 511, 00:12:30.662 "buf_cache_size": 4294967295, 00:12:30.662 "dif_insert_or_strip": false, 00:12:30.662 "zcopy": false, 00:12:30.662 "c2h_success": false, 00:12:30.662 "sock_priority": 0, 00:12:30.662 "abort_timeout_sec": 1, 00:12:30.662 "ack_timeout": 0, 00:12:30.662 "data_wr_pool_size": 0 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_create_subsystem", 00:12:30.662 "params": { 00:12:30.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.662 "allow_any_host": false, 00:12:30.662 "serial_number": "00000000000000000000", 00:12:30.662 "model_number": "SPDK bdev Controller", 00:12:30.662 "max_namespaces": 32, 00:12:30.662 "min_cntlid": 1, 00:12:30.662 "max_cntlid": 65519, 00:12:30.662 "ana_reporting": false 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_subsystem_add_host", 00:12:30.662 "params": { 00:12:30.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.662 "host": "nqn.2016-06.io.spdk:host1", 
00:12:30.662 "psk": "key0" 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_subsystem_add_ns", 00:12:30.662 "params": { 00:12:30.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.662 "namespace": { 00:12:30.662 "nsid": 1, 00:12:30.662 "bdev_name": "malloc0", 00:12:30.662 "nguid": "A2A8F634EF9145F49AE240B65CB4B730", 00:12:30.662 "uuid": "a2a8f634-ef91-45f4-9ae2-40b65cb4b730", 00:12:30.662 "no_auto_visible": false 00:12:30.662 } 00:12:30.662 } 00:12:30.662 }, 00:12:30.662 { 00:12:30.662 "method": "nvmf_subsystem_add_listener", 00:12:30.662 "params": { 00:12:30.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.662 "listen_address": { 00:12:30.662 "trtype": "TCP", 00:12:30.662 "adrfam": "IPv4", 00:12:30.662 "traddr": "10.0.0.2", 00:12:30.662 "trsvcid": "4420" 00:12:30.662 }, 00:12:30.662 "secure_channel": true 00:12:30.662 } 00:12:30.663 } 00:12:30.663 ] 00:12:30.663 } 00:12:30.663 ] 00:12:30.663 }' 00:12:30.663 02:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.663 02:57:09 -- nvmf/common.sh@470 -- # nvmfpid=84112 00:12:30.663 02:57:09 -- nvmf/common.sh@471 -- # waitforlisten 84112 00:12:30.663 02:57:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:12:30.663 02:57:09 -- common/autotest_common.sh@817 -- # '[' -z 84112 ']' 00:12:30.663 02:57:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.663 02:57:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:30.663 02:57:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.663 02:57:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:30.663 02:57:09 -- common/autotest_common.sh@10 -- # set +x 00:12:30.663 [2024-04-23 02:57:09.807189] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:30.663 [2024-04-23 02:57:09.807274] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.922 [2024-04-23 02:57:09.928670] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:30.922 [2024-04-23 02:57:09.947042] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:30.922 [2024-04-23 02:57:09.977210] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.922 [2024-04-23 02:57:09.977261] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.922 [2024-04-23 02:57:09.977286] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.922 [2024-04-23 02:57:09.977293] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.922 [2024-04-23 02:57:09.977300] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:30.922 [2024-04-23 02:57:09.977381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.181 [2024-04-23 02:57:10.159848] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.181 [2024-04-23 02:57:10.191793] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:31.181 [2024-04-23 02:57:10.191972] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.752 02:57:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:31.752 02:57:10 -- common/autotest_common.sh@850 -- # return 0 00:12:31.752 02:57:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:31.752 02:57:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:31.752 02:57:10 -- common/autotest_common.sh@10 -- # set +x 00:12:31.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:31.752 02:57:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.752 02:57:10 -- target/tls.sh@272 -- # bdevperf_pid=84144 00:12:31.752 02:57:10 -- target/tls.sh@273 -- # waitforlisten 84144 /var/tmp/bdevperf.sock 00:12:31.752 02:57:10 -- common/autotest_common.sh@817 -- # '[' -z 84144 ']' 00:12:31.752 02:57:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:31.752 02:57:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:31.752 02:57:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:31.752 02:57:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:31.752 02:57:10 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:12:31.752 02:57:10 -- common/autotest_common.sh@10 -- # set +x 00:12:31.752 02:57:10 -- target/tls.sh@270 -- # echo '{ 00:12:31.752 "subsystems": [ 00:12:31.752 { 00:12:31.752 "subsystem": "keyring", 00:12:31.752 "config": [ 00:12:31.752 { 00:12:31.752 "method": "keyring_file_add_key", 00:12:31.752 "params": { 00:12:31.752 "name": "key0", 00:12:31.752 "path": "/tmp/tmp.WyEpbZx3z7" 00:12:31.752 } 00:12:31.752 } 00:12:31.752 ] 00:12:31.752 }, 00:12:31.752 { 00:12:31.752 "subsystem": "iobuf", 00:12:31.752 "config": [ 00:12:31.752 { 00:12:31.752 "method": "iobuf_set_options", 00:12:31.752 "params": { 00:12:31.752 "small_pool_count": 8192, 00:12:31.752 "large_pool_count": 1024, 00:12:31.752 "small_bufsize": 8192, 00:12:31.752 "large_bufsize": 135168 00:12:31.752 } 00:12:31.752 } 00:12:31.752 ] 00:12:31.752 }, 00:12:31.752 { 00:12:31.753 "subsystem": "sock", 00:12:31.753 "config": [ 00:12:31.753 { 00:12:31.753 "method": "sock_impl_set_options", 00:12:31.753 "params": { 00:12:31.753 "impl_name": "uring", 00:12:31.753 "recv_buf_size": 2097152, 00:12:31.753 "send_buf_size": 2097152, 00:12:31.753 "enable_recv_pipe": true, 00:12:31.753 "enable_quickack": false, 00:12:31.753 "enable_placement_id": 0, 00:12:31.753 "enable_zerocopy_send_server": false, 00:12:31.753 "enable_zerocopy_send_client": false, 00:12:31.753 "zerocopy_threshold": 0, 00:12:31.753 "tls_version": 0, 00:12:31.753 "enable_ktls": false 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "sock_impl_set_options", 00:12:31.753 "params": { 00:12:31.753 "impl_name": "posix", 00:12:31.753 "recv_buf_size": 2097152, 00:12:31.753 "send_buf_size": 2097152, 00:12:31.753 
"enable_recv_pipe": true, 00:12:31.753 "enable_quickack": false, 00:12:31.753 "enable_placement_id": 0, 00:12:31.753 "enable_zerocopy_send_server": true, 00:12:31.753 "enable_zerocopy_send_client": false, 00:12:31.753 "zerocopy_threshold": 0, 00:12:31.753 "tls_version": 0, 00:12:31.753 "enable_ktls": false 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "sock_impl_set_options", 00:12:31.753 "params": { 00:12:31.753 "impl_name": "ssl", 00:12:31.753 "recv_buf_size": 4096, 00:12:31.753 "send_buf_size": 4096, 00:12:31.753 "enable_recv_pipe": true, 00:12:31.753 "enable_quickack": false, 00:12:31.753 "enable_placement_id": 0, 00:12:31.753 "enable_zerocopy_send_server": true, 00:12:31.753 "enable_zerocopy_send_client": false, 00:12:31.753 "zerocopy_threshold": 0, 00:12:31.753 "tls_version": 0, 00:12:31.753 "enable_ktls": false 00:12:31.753 } 00:12:31.753 } 00:12:31.753 ] 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "subsystem": "vmd", 00:12:31.753 "config": [] 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "subsystem": "accel", 00:12:31.753 "config": [ 00:12:31.753 { 00:12:31.753 "method": "accel_set_options", 00:12:31.753 "params": { 00:12:31.753 "small_cache_size": 128, 00:12:31.753 "large_cache_size": 16, 00:12:31.753 "task_count": 2048, 00:12:31.753 "sequence_count": 2048, 00:12:31.753 "buf_count": 2048 00:12:31.753 } 00:12:31.753 } 00:12:31.753 ] 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "subsystem": "bdev", 00:12:31.753 "config": [ 00:12:31.753 { 00:12:31.753 "method": "bdev_set_options", 00:12:31.753 "params": { 00:12:31.753 "bdev_io_pool_size": 65535, 00:12:31.753 "bdev_io_cache_size": 256, 00:12:31.753 "bdev_auto_examine": true, 00:12:31.753 "iobuf_small_cache_size": 128, 00:12:31.753 "iobuf_large_cache_size": 16 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_raid_set_options", 00:12:31.753 "params": { 00:12:31.753 "process_window_size_kb": 1024 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_iscsi_set_options", 00:12:31.753 "params": { 00:12:31.753 "timeout_sec": 30 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_nvme_set_options", 00:12:31.753 "params": { 00:12:31.753 "action_on_timeout": "none", 00:12:31.753 "timeout_us": 0, 00:12:31.753 "timeout_admin_us": 0, 00:12:31.753 "keep_alive_timeout_ms": 10000, 00:12:31.753 "arbitration_burst": 0, 00:12:31.753 "low_priority_weight": 0, 00:12:31.753 "medium_priority_weight": 0, 00:12:31.753 "high_priority_weight": 0, 00:12:31.753 "nvme_adminq_poll_period_us": 10000, 00:12:31.753 "nvme_ioq_poll_period_us": 0, 00:12:31.753 "io_queue_requests": 512, 00:12:31.753 "delay_cmd_submit": true, 00:12:31.753 "transport_retry_count": 4, 00:12:31.753 "bdev_retry_count": 3, 00:12:31.753 "transport_ack_timeout": 0, 00:12:31.753 "ctrlr_loss_timeout_sec": 0, 00:12:31.753 "reconnect_delay_sec": 0, 00:12:31.753 "fast_io_fail_timeout_sec": 0, 00:12:31.753 "disable_auto_failback": false, 00:12:31.753 "generate_uuids": false, 00:12:31.753 "transport_tos": 0, 00:12:31.753 "nvme_error_stat": false, 00:12:31.753 "rdma_srq_size": 0, 00:12:31.753 "io_path_stat": false, 00:12:31.753 "allow_accel_sequence": false, 00:12:31.753 "rdma_max_cq_size": 0, 00:12:31.753 "rdma_cm_event_timeout_ms": 0, 00:12:31.753 "dhchap_digests": [ 00:12:31.753 "sha256", 00:12:31.753 "sha384", 00:12:31.753 "sha512" 00:12:31.753 ], 00:12:31.753 "dhchap_dhgroups": [ 00:12:31.753 "null", 00:12:31.753 "ffdhe2048", 00:12:31.753 "ffdhe3072", 00:12:31.753 "ffdhe4096", 00:12:31.753 "ffdhe6144", 
00:12:31.753 "ffdhe8192" 00:12:31.753 ] 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_nvme_attach_controller", 00:12:31.753 "params": { 00:12:31.753 "name": "nvme0", 00:12:31.753 "trtype": "TCP", 00:12:31.753 "adrfam": "IPv4", 00:12:31.753 "traddr": "10.0.0.2", 00:12:31.753 "trsvcid": "4420", 00:12:31.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.753 "prchk_reftag": false, 00:12:31.753 "prchk_guard": false, 00:12:31.753 "ctrlr_loss_timeout_sec": 0, 00:12:31.753 "reconnect_delay_sec": 0, 00:12:31.753 "fast_io_fail_timeout_sec": 0, 00:12:31.753 "psk": "key0", 00:12:31.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:31.753 "hdgst": false, 00:12:31.753 "ddgst": false 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_nvme_set_hotplug", 00:12:31.753 "params": { 00:12:31.753 "period_us": 100000, 00:12:31.753 "enable": false 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_enable_histogram", 00:12:31.753 "params": { 00:12:31.753 "name": "nvme0n1", 00:12:31.753 "enable": true 00:12:31.753 } 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "method": "bdev_wait_for_examine" 00:12:31.753 } 00:12:31.753 ] 00:12:31.753 }, 00:12:31.753 { 00:12:31.753 "subsystem": "nbd", 00:12:31.753 "config": [] 00:12:31.753 } 00:12:31.753 ] 00:12:31.753 }' 00:12:31.753 [2024-04-23 02:57:10.849876] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:31.753 [2024-04-23 02:57:10.849978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84144 ] 00:12:32.012 [2024-04-23 02:57:10.966810] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:32.012 [2024-04-23 02:57:10.987056] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.012 [2024-04-23 02:57:11.026146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.012 [2024-04-23 02:57:11.164410] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:32.949 02:57:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:32.949 02:57:11 -- common/autotest_common.sh@850 -- # return 0 00:12:32.949 02:57:11 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:12:32.949 02:57:11 -- target/tls.sh@275 -- # jq -r '.[].name' 00:12:32.949 02:57:12 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:32.949 02:57:12 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:33.207 Running I/O for 1 seconds... 
00:12:34.144 00:12:34.144 Latency(us) 00:12:34.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.145 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:12:34.145 Verification LBA range: start 0x0 length 0x2000 00:12:34.145 nvme0n1 : 1.02 4264.32 16.66 0.00 0.00 29729.68 6106.76 25380.31 00:12:34.145 =================================================================================================================== 00:12:34.145 Total : 4264.32 16.66 0.00 0.00 29729.68 6106.76 25380.31 00:12:34.145 0 00:12:34.145 02:57:13 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:12:34.145 02:57:13 -- target/tls.sh@279 -- # cleanup 00:12:34.145 02:57:13 -- target/tls.sh@15 -- # process_shm --id 0 00:12:34.145 02:57:13 -- common/autotest_common.sh@794 -- # type=--id 00:12:34.145 02:57:13 -- common/autotest_common.sh@795 -- # id=0 00:12:34.145 02:57:13 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:34.145 02:57:13 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:34.145 02:57:13 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:34.145 02:57:13 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:34.145 02:57:13 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:34.145 02:57:13 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:34.145 nvmf_trace.0 00:12:34.145 02:57:13 -- common/autotest_common.sh@809 -- # return 0 00:12:34.145 02:57:13 -- target/tls.sh@16 -- # killprocess 84144 00:12:34.145 02:57:13 -- common/autotest_common.sh@936 -- # '[' -z 84144 ']' 00:12:34.145 02:57:13 -- common/autotest_common.sh@940 -- # kill -0 84144 00:12:34.145 02:57:13 -- common/autotest_common.sh@941 -- # uname 00:12:34.145 02:57:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.145 02:57:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84144 00:12:34.145 killing process with pid 84144 00:12:34.145 Received shutdown signal, test time was about 1.000000 seconds 00:12:34.145 00:12:34.145 Latency(us) 00:12:34.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:34.145 =================================================================================================================== 00:12:34.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:34.145 02:57:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:34.145 02:57:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:34.145 02:57:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84144' 00:12:34.145 02:57:13 -- common/autotest_common.sh@955 -- # kill 84144 00:12:34.145 02:57:13 -- common/autotest_common.sh@960 -- # wait 84144 00:12:34.404 02:57:13 -- target/tls.sh@17 -- # nvmftestfini 00:12:34.404 02:57:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:34.404 02:57:13 -- nvmf/common.sh@117 -- # sync 00:12:34.404 02:57:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.404 02:57:13 -- nvmf/common.sh@120 -- # set +e 00:12:34.404 02:57:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.404 02:57:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.404 rmmod nvme_tcp 00:12:34.404 rmmod nvme_fabrics 00:12:34.404 rmmod nvme_keyring 00:12:34.404 02:57:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.404 02:57:13 -- nvmf/common.sh@124 -- # set -e 00:12:34.404 02:57:13 -- 
nvmf/common.sh@125 -- # return 0 00:12:34.404 02:57:13 -- nvmf/common.sh@478 -- # '[' -n 84112 ']' 00:12:34.404 02:57:13 -- nvmf/common.sh@479 -- # killprocess 84112 00:12:34.404 02:57:13 -- common/autotest_common.sh@936 -- # '[' -z 84112 ']' 00:12:34.404 02:57:13 -- common/autotest_common.sh@940 -- # kill -0 84112 00:12:34.404 02:57:13 -- common/autotest_common.sh@941 -- # uname 00:12:34.404 02:57:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:34.404 02:57:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84112 00:12:34.404 02:57:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:34.404 killing process with pid 84112 00:12:34.404 02:57:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:34.404 02:57:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84112' 00:12:34.404 02:57:13 -- common/autotest_common.sh@955 -- # kill 84112 00:12:34.404 02:57:13 -- common/autotest_common.sh@960 -- # wait 84112 00:12:34.664 02:57:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:34.664 02:57:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:34.664 02:57:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:34.664 02:57:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.664 02:57:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.664 02:57:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.664 02:57:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.664 02:57:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:34.664 02:57:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:34.664 02:57:13 -- target/tls.sh@18 -- # rm -f /tmp/tmp.YPMR9AQ6U7 /tmp/tmp.AceOazIRZz /tmp/tmp.WyEpbZx3z7 00:12:34.664 00:12:34.664 real 1m16.536s 00:12:34.664 user 2m0.019s 00:12:34.664 sys 0m26.044s 00:12:34.664 ************************************ 00:12:34.664 END TEST nvmf_tls 00:12:34.664 ************************************ 00:12:34.664 02:57:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:34.664 02:57:13 -- common/autotest_common.sh@10 -- # set +x 00:12:34.664 02:57:13 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:34.664 02:57:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:34.664 02:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:34.664 02:57:13 -- common/autotest_common.sh@10 -- # set +x 00:12:34.923 ************************************ 00:12:34.923 START TEST nvmf_fips 00:12:34.923 ************************************ 00:12:34.923 02:57:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:34.923 * Looking for test storage... 
00:12:34.923 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:34.923 02:57:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:34.923 02:57:13 -- nvmf/common.sh@7 -- # uname -s 00:12:34.923 02:57:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:34.923 02:57:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:34.923 02:57:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:34.923 02:57:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:34.923 02:57:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:34.923 02:57:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:34.923 02:57:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:34.923 02:57:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:34.923 02:57:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:34.923 02:57:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:34.923 02:57:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:12:34.923 02:57:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:12:34.923 02:57:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:34.923 02:57:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:34.923 02:57:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:34.923 02:57:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:34.923 02:57:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.923 02:57:13 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.923 02:57:13 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.924 02:57:13 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.924 02:57:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.924 02:57:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.924 02:57:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.924 02:57:13 -- paths/export.sh@5 -- # export PATH 00:12:34.924 02:57:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.924 02:57:13 -- nvmf/common.sh@47 -- # : 0 00:12:34.924 02:57:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:34.924 02:57:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:34.924 02:57:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:34.924 02:57:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:34.924 02:57:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:34.924 02:57:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:34.924 02:57:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:34.924 02:57:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:34.924 02:57:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:34.924 02:57:13 -- fips/fips.sh@89 -- # check_openssl_version 00:12:34.924 02:57:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:34.924 02:57:13 -- fips/fips.sh@85 -- # openssl version 00:12:34.924 02:57:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:34.924 02:57:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:12:34.924 02:57:13 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:12:34.924 02:57:13 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:12:34.924 02:57:13 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:12:34.924 02:57:13 -- scripts/common.sh@333 -- # IFS=.-: 00:12:34.924 02:57:13 -- scripts/common.sh@333 -- # read -ra ver1 00:12:34.924 02:57:13 -- scripts/common.sh@334 -- # IFS=.-: 00:12:34.924 02:57:13 -- scripts/common.sh@334 -- # read -ra ver2 00:12:34.924 02:57:13 -- scripts/common.sh@335 -- # local 'op=>=' 00:12:34.924 02:57:13 -- scripts/common.sh@337 -- # ver1_l=3 00:12:34.924 02:57:13 -- scripts/common.sh@338 -- # ver2_l=3 00:12:34.924 02:57:13 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:12:34.924 02:57:13 -- scripts/common.sh@341 -- # case "$op" in 00:12:34.924 02:57:13 -- scripts/common.sh@345 -- # : 1 00:12:34.924 02:57:13 -- scripts/common.sh@361 -- # (( v = 0 )) 00:12:34.924 02:57:13 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.924 02:57:13 -- scripts/common.sh@362 -- # decimal 3 00:12:34.924 02:57:13 -- scripts/common.sh@350 -- # local d=3 00:12:34.924 02:57:13 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:34.924 02:57:13 -- scripts/common.sh@352 -- # echo 3 00:12:34.924 02:57:13 -- scripts/common.sh@362 -- # ver1[v]=3 00:12:34.924 02:57:13 -- scripts/common.sh@363 -- # decimal 3 00:12:34.924 02:57:13 -- scripts/common.sh@350 -- # local d=3 00:12:34.924 02:57:14 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:34.924 02:57:14 -- scripts/common.sh@352 -- # echo 3 00:12:34.924 02:57:14 -- scripts/common.sh@363 -- # ver2[v]=3 00:12:34.924 02:57:14 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:12:34.924 02:57:14 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:12:34.924 02:57:14 -- scripts/common.sh@361 -- # (( v++ )) 00:12:34.924 02:57:14 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.924 02:57:14 -- scripts/common.sh@362 -- # decimal 0 00:12:34.924 02:57:14 -- scripts/common.sh@350 -- # local d=0 00:12:34.924 02:57:14 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:34.924 02:57:14 -- scripts/common.sh@352 -- # echo 0 00:12:34.924 02:57:14 -- scripts/common.sh@362 -- # ver1[v]=0 00:12:34.924 02:57:14 -- scripts/common.sh@363 -- # decimal 0 00:12:34.924 02:57:14 -- scripts/common.sh@350 -- # local d=0 00:12:34.924 02:57:14 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:34.924 02:57:14 -- scripts/common.sh@352 -- # echo 0 00:12:34.924 02:57:14 -- scripts/common.sh@363 -- # ver2[v]=0 00:12:34.924 02:57:14 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:12:34.924 02:57:14 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:12:34.924 02:57:14 -- scripts/common.sh@361 -- # (( v++ )) 00:12:34.924 02:57:14 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:34.924 02:57:14 -- scripts/common.sh@362 -- # decimal 9 00:12:34.924 02:57:14 -- scripts/common.sh@350 -- # local d=9 00:12:34.924 02:57:14 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:12:34.924 02:57:14 -- scripts/common.sh@352 -- # echo 9 00:12:34.924 02:57:14 -- scripts/common.sh@362 -- # ver1[v]=9 00:12:34.924 02:57:14 -- scripts/common.sh@363 -- # decimal 0 00:12:34.924 02:57:14 -- scripts/common.sh@350 -- # local d=0 00:12:34.924 02:57:14 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:34.924 02:57:14 -- scripts/common.sh@352 -- # echo 0 00:12:34.924 02:57:14 -- scripts/common.sh@363 -- # ver2[v]=0 00:12:34.924 02:57:14 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:12:34.924 02:57:14 -- scripts/common.sh@364 -- # return 0 00:12:34.924 02:57:14 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:34.924 02:57:14 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:12:34.924 02:57:14 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:34.924 02:57:14 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:34.924 02:57:14 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:34.924 02:57:14 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:34.924 02:57:14 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:34.924 02:57:14 -- fips/fips.sh@113 -- # build_openssl_config 00:12:34.924 02:57:14 -- fips/fips.sh@37 -- # cat 00:12:34.924 02:57:14 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:12:34.924 02:57:14 -- fips/fips.sh@58 -- # cat - 00:12:34.924 02:57:14 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:34.924 02:57:14 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:12:34.924 02:57:14 -- fips/fips.sh@116 -- # mapfile -t providers 00:12:34.924 02:57:14 -- fips/fips.sh@116 -- # openssl list -providers 00:12:34.924 02:57:14 -- fips/fips.sh@116 -- # grep name 00:12:35.183 02:57:14 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:12:35.183 02:57:14 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:12:35.183 02:57:14 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:35.183 02:57:14 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:12:35.183 02:57:14 -- common/autotest_common.sh@638 -- # local es=0 00:12:35.184 02:57:14 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:35.184 02:57:14 -- fips/fips.sh@127 -- # : 00:12:35.184 02:57:14 -- common/autotest_common.sh@626 -- # local arg=openssl 00:12:35.184 02:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:35.184 02:57:14 -- common/autotest_common.sh@630 -- # type -t openssl 00:12:35.184 02:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:35.184 02:57:14 -- common/autotest_common.sh@632 -- # type -P openssl 00:12:35.184 02:57:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:35.184 02:57:14 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:12:35.184 02:57:14 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:12:35.184 02:57:14 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:12:35.184 Error setting digest 00:12:35.184 0022518E147F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:12:35.184 0022518E147F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:12:35.184 02:57:14 -- common/autotest_common.sh@641 -- # es=1 00:12:35.184 02:57:14 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:35.184 02:57:14 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:35.184 02:57:14 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:35.184 02:57:14 -- fips/fips.sh@130 -- # nvmftestinit 00:12:35.184 02:57:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:35.184 02:57:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.184 02:57:14 -- nvmf/common.sh@437 -- # prepare_net_devs 
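The "Error setting digest" lines above are the pass condition, not a failure: fips.sh runs openssl md5 under OPENSSL_CONF=spdk_fips.conf inside the NOT wrapper, and an OpenSSL 3 FIPS provider must refuse to fetch MD5 (the 0308010C unsupported-algorithm error), so the command exiting non-zero is what confirms FIPS mode is actually in force. The same probe by hand, assuming the generated spdk_fips.conf sits in the current directory:

    # MD5 must be rejected when the FIPS provider is active
    if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null; then
        echo "MD5 still available, FIPS mode NOT active" >&2
    else
        echo "MD5 rejected, FIPS provider active"
    fi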
00:12:35.184 02:57:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:35.184 02:57:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:35.184 02:57:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.184 02:57:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.184 02:57:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.184 02:57:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:35.184 02:57:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:35.184 02:57:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:35.184 02:57:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:35.184 02:57:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:35.184 02:57:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:35.184 02:57:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.184 02:57:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.184 02:57:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:35.184 02:57:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:35.184 02:57:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:35.184 02:57:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:35.184 02:57:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:35.184 02:57:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.184 02:57:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:35.184 02:57:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:35.184 02:57:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:35.184 02:57:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:35.184 02:57:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:35.184 02:57:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:35.184 Cannot find device "nvmf_tgt_br" 00:12:35.184 02:57:14 -- nvmf/common.sh@155 -- # true 00:12:35.184 02:57:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:35.184 Cannot find device "nvmf_tgt_br2" 00:12:35.184 02:57:14 -- nvmf/common.sh@156 -- # true 00:12:35.184 02:57:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:35.184 02:57:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:35.184 Cannot find device "nvmf_tgt_br" 00:12:35.184 02:57:14 -- nvmf/common.sh@158 -- # true 00:12:35.184 02:57:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:35.184 Cannot find device "nvmf_tgt_br2" 00:12:35.184 02:57:14 -- nvmf/common.sh@159 -- # true 00:12:35.184 02:57:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:35.184 02:57:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:35.184 02:57:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:35.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.184 02:57:14 -- nvmf/common.sh@162 -- # true 00:12:35.184 02:57:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:35.184 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:35.184 02:57:14 -- nvmf/common.sh@163 -- # true 00:12:35.184 02:57:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:35.184 02:57:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:35.184 02:57:14 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:35.184 02:57:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:35.184 02:57:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:35.184 02:57:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:35.184 02:57:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:35.184 02:57:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:35.443 02:57:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:35.443 02:57:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:35.443 02:57:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:35.443 02:57:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:35.443 02:57:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:35.443 02:57:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:35.443 02:57:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:35.443 02:57:14 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:35.443 02:57:14 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:35.443 02:57:14 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:35.443 02:57:14 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:35.443 02:57:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:35.443 02:57:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:35.443 02:57:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:35.443 02:57:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:35.443 02:57:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:35.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:12:35.443 00:12:35.443 --- 10.0.0.2 ping statistics --- 00:12:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.443 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:35.443 02:57:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:35.443 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:35.443 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:35.443 00:12:35.443 --- 10.0.0.3 ping statistics --- 00:12:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.443 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:35.443 02:57:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:35.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:35.443 00:12:35.443 --- 10.0.0.1 ping statistics --- 00:12:35.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.443 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:35.443 02:57:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.443 02:57:14 -- nvmf/common.sh@422 -- # return 0 00:12:35.443 02:57:14 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:35.443 02:57:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.443 02:57:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:35.443 02:57:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:35.443 02:57:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.443 02:57:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:35.443 02:57:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:35.443 02:57:14 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:12:35.443 02:57:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:35.443 02:57:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:35.443 02:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:35.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.443 02:57:14 -- nvmf/common.sh@470 -- # nvmfpid=84406 00:12:35.443 02:57:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:35.443 02:57:14 -- nvmf/common.sh@471 -- # waitforlisten 84406 00:12:35.443 02:57:14 -- common/autotest_common.sh@817 -- # '[' -z 84406 ']' 00:12:35.443 02:57:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.443 02:57:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:35.443 02:57:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.443 02:57:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:35.443 02:57:14 -- common/autotest_common.sh@10 -- # set +x 00:12:35.443 [2024-04-23 02:57:14.553452] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:35.443 [2024-04-23 02:57:14.553547] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.702 [2024-04-23 02:57:14.675422] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:35.702 [2024-04-23 02:57:14.692841] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.702 [2024-04-23 02:57:14.731469] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.702 [2024-04-23 02:57:14.731524] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.702 [2024-04-23 02:57:14.731537] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.702 [2024-04-23 02:57:14.731547] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.702 [2024-04-23 02:57:14.731556] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
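The ping statistics above confirm the topology nvmf_veth_init just built: the initiator interface nvmf_init_if (10.0.0.1) stays in the default namespace, the two target interfaces (10.0.0.2 and 10.0.0.3) live inside the nvmf_tgt_ns_spdk namespace, and everything is stitched together through the nvmf_br bridge with an iptables ACCEPT rule for TCP port 4420. A stripped-down sketch of the same pattern with hypothetical names, using a bare veth pair instead of the bridge:

    # one veth pair straddling a namespace boundary
    ip netns add tgt_ns
    ip link add init_if type veth peer name tgt_if
    ip link set tgt_if netns tgt_ns
    ip addr add 10.0.0.1/24 dev init_if
    ip link set init_if up
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev tgt_if
    ip netns exec tgt_ns ip link set tgt_if up
    ip netns exec tgt_ns ip link set lo up
    ping -c 1 10.0.0.2    # host can now reach the namespaced target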
00:12:35.702 [2024-04-23 02:57:14.731594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.638 02:57:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:36.638 02:57:15 -- common/autotest_common.sh@850 -- # return 0 00:12:36.638 02:57:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:36.638 02:57:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:36.638 02:57:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.638 02:57:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.638 02:57:15 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:12:36.638 02:57:15 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:36.638 02:57:15 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:36.638 02:57:15 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:36.638 02:57:15 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:36.638 02:57:15 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:36.638 02:57:15 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:36.638 02:57:15 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:36.897 [2024-04-23 02:57:15.813430] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.897 [2024-04-23 02:57:15.829373] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:36.897 [2024-04-23 02:57:15.829570] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.897 [2024-04-23 02:57:15.855821] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:12:36.897 malloc0 00:12:36.897 02:57:15 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:36.897 02:57:15 -- fips/fips.sh@147 -- # bdevperf_pid=84451 00:12:36.898 02:57:15 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:36.898 02:57:15 -- fips/fips.sh@148 -- # waitforlisten 84451 /var/tmp/bdevperf.sock 00:12:36.898 02:57:15 -- common/autotest_common.sh@817 -- # '[' -z 84451 ']' 00:12:36.898 02:57:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:36.898 02:57:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:36.898 02:57:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:36.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:36.898 02:57:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:36.898 02:57:15 -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 [2024-04-23 02:57:15.965869] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:12:36.898 [2024-04-23 02:57:15.965957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84451 ] 00:12:37.156 [2024-04-23 02:57:16.088258] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:37.156 [2024-04-23 02:57:16.110201] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.156 [2024-04-23 02:57:16.150280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.724 02:57:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:37.724 02:57:16 -- common/autotest_common.sh@850 -- # return 0 00:12:37.724 02:57:16 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:37.983 [2024-04-23 02:57:17.083541] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:37.983 [2024-04-23 02:57:17.083653] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:12:38.243 TLSTESTn1 00:12:38.243 02:57:17 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:38.243 Running I/O for 10 seconds... 00:12:48.222 00:12:48.222 Latency(us) 00:12:48.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.222 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:48.222 Verification LBA range: start 0x0 length 0x2000 00:12:48.222 TLSTESTn1 : 10.03 4033.61 15.76 0.00 0.00 31670.43 6642.97 20971.52 00:12:48.222 =================================================================================================================== 00:12:48.222 Total : 4033.61 15.76 0.00 0.00 31670.43 6642.97 20971.52 00:12:48.222 0 00:12:48.222 02:57:27 -- fips/fips.sh@1 -- # cleanup 00:12:48.222 02:57:27 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:48.222 02:57:27 -- common/autotest_common.sh@794 -- # type=--id 00:12:48.222 02:57:27 -- common/autotest_common.sh@795 -- # id=0 00:12:48.222 02:57:27 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:12:48.222 02:57:27 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:48.222 02:57:27 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:12:48.222 02:57:27 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:12:48.222 02:57:27 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:12:48.222 02:57:27 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:48.222 nvmf_trace.0 00:12:48.481 02:57:27 -- common/autotest_common.sh@809 -- # return 0 00:12:48.481 02:57:27 -- fips/fips.sh@16 -- # killprocess 84451 00:12:48.481 02:57:27 -- common/autotest_common.sh@936 -- # '[' -z 84451 ']' 00:12:48.481 02:57:27 -- common/autotest_common.sh@940 -- # kill -0 84451 00:12:48.481 02:57:27 -- common/autotest_common.sh@941 -- # uname 00:12:48.481 02:57:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 
00:12:48.481 02:57:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84451 00:12:48.481 killing process with pid 84451 00:12:48.481 Received shutdown signal, test time was about 10.000000 seconds 00:12:48.481 00:12:48.481 Latency(us) 00:12:48.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:48.481 =================================================================================================================== 00:12:48.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:48.481 02:57:27 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:12:48.481 02:57:27 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:12:48.481 02:57:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84451' 00:12:48.481 02:57:27 -- common/autotest_common.sh@955 -- # kill 84451 00:12:48.481 [2024-04-23 02:57:27.438628] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:12:48.481 02:57:27 -- common/autotest_common.sh@960 -- # wait 84451 00:12:48.481 02:57:27 -- fips/fips.sh@17 -- # nvmftestfini 00:12:48.481 02:57:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:48.481 02:57:27 -- nvmf/common.sh@117 -- # sync 00:12:48.481 02:57:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:48.481 02:57:27 -- nvmf/common.sh@120 -- # set +e 00:12:48.481 02:57:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:48.481 02:57:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:48.481 rmmod nvme_tcp 00:12:48.740 rmmod nvme_fabrics 00:12:48.740 rmmod nvme_keyring 00:12:48.740 02:57:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:48.740 02:57:27 -- nvmf/common.sh@124 -- # set -e 00:12:48.740 02:57:27 -- nvmf/common.sh@125 -- # return 0 00:12:48.740 02:57:27 -- nvmf/common.sh@478 -- # '[' -n 84406 ']' 00:12:48.740 02:57:27 -- nvmf/common.sh@479 -- # killprocess 84406 00:12:48.740 02:57:27 -- common/autotest_common.sh@936 -- # '[' -z 84406 ']' 00:12:48.740 02:57:27 -- common/autotest_common.sh@940 -- # kill -0 84406 00:12:48.740 02:57:27 -- common/autotest_common.sh@941 -- # uname 00:12:48.740 02:57:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:48.740 02:57:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84406 00:12:48.740 killing process with pid 84406 00:12:48.740 02:57:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:12:48.740 02:57:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:12:48.740 02:57:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84406' 00:12:48.740 02:57:27 -- common/autotest_common.sh@955 -- # kill 84406 00:12:48.740 [2024-04-23 02:57:27.725647] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:12:48.740 02:57:27 -- common/autotest_common.sh@960 -- # wait 84406 00:12:48.740 02:57:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:48.740 02:57:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:48.740 02:57:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:48.740 02:57:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.740 02:57:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:48.740 02:57:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.740 02:57:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.740 02:57:27 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.999 02:57:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:48.999 02:57:27 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:48.999 ************************************ 00:12:48.999 END TEST nvmf_fips 00:12:48.999 ************************************ 00:12:48.999 00:12:48.999 real 0m14.049s 00:12:48.999 user 0m19.018s 00:12:48.999 sys 0m5.721s 00:12:48.999 02:57:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:48.999 02:57:27 -- common/autotest_common.sh@10 -- # set +x 00:12:48.999 02:57:27 -- nvmf/nvmf.sh@64 -- # '[' 1 -eq 1 ']' 00:12:48.999 02:57:27 -- nvmf/nvmf.sh@65 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:48.999 02:57:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:48.999 02:57:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:48.999 02:57:27 -- common/autotest_common.sh@10 -- # set +x 00:12:48.999 ************************************ 00:12:48.999 START TEST nvmf_fuzz 00:12:48.999 ************************************ 00:12:48.999 02:57:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:48.999 * Looking for test storage... 00:12:48.999 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.999 02:57:28 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.999 02:57:28 -- nvmf/common.sh@7 -- # uname -s 00:12:48.999 02:57:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.999 02:57:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.999 02:57:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.999 02:57:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.999 02:57:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.999 02:57:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.999 02:57:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.999 02:57:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.999 02:57:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.999 02:57:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.999 02:57:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:12:48.999 02:57:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:12:48.999 02:57:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.999 02:57:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.999 02:57:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.999 02:57:28 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:48.999 02:57:28 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.999 02:57:28 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.999 02:57:28 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.999 02:57:28 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.999 02:57:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.999 02:57:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.999 02:57:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.999 02:57:28 -- paths/export.sh@5 -- # export PATH 00:12:48.999 02:57:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.999 02:57:28 -- nvmf/common.sh@47 -- # : 0 00:12:48.999 02:57:28 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:48.999 02:57:28 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:48.999 02:57:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:48.999 02:57:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.999 02:57:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.999 02:57:28 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:48.999 02:57:28 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:48.999 02:57:28 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:48.999 02:57:28 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:48.999 02:57:28 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:48.999 02:57:28 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.999 02:57:28 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:48.999 02:57:28 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:48.999 02:57:28 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:48.999 02:57:28 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.999 02:57:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.999 02:57:28 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.999 02:57:28 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:48.999 02:57:28 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:48.999 02:57:28 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:48.999 02:57:28 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:48.999 02:57:28 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:48.999 02:57:28 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:48.999 02:57:28 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.999 02:57:28 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.999 02:57:28 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.999 02:57:28 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:49.000 02:57:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:49.000 02:57:28 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:49.000 02:57:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:49.000 02:57:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.000 02:57:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:49.000 02:57:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:49.000 02:57:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:49.000 02:57:28 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:49.000 02:57:28 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:49.000 02:57:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:49.258 Cannot find device "nvmf_tgt_br" 00:12:49.258 02:57:28 -- nvmf/common.sh@155 -- # true 00:12:49.258 02:57:28 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:49.258 Cannot find device "nvmf_tgt_br2" 00:12:49.258 02:57:28 -- nvmf/common.sh@156 -- # true 00:12:49.258 02:57:28 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:49.258 02:57:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:49.258 Cannot find device "nvmf_tgt_br" 00:12:49.258 02:57:28 -- nvmf/common.sh@158 -- # true 00:12:49.258 02:57:28 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:49.258 Cannot find device "nvmf_tgt_br2" 00:12:49.258 02:57:28 -- nvmf/common.sh@159 -- # true 00:12:49.258 02:57:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:49.258 02:57:28 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:49.258 02:57:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.258 02:57:28 -- nvmf/common.sh@162 -- # true 00:12:49.258 02:57:28 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.258 02:57:28 -- nvmf/common.sh@163 -- # true 00:12:49.258 02:57:28 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.258 02:57:28 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.258 02:57:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.258 02:57:28 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.258 02:57:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.258 02:57:28 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.258 02:57:28 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.258 02:57:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:49.258 02:57:28 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:49.258 02:57:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:49.258 02:57:28 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:49.258 02:57:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:49.258 02:57:28 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:49.258 02:57:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.258 02:57:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.258 02:57:28 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.258 02:57:28 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:49.258 02:57:28 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:49.258 02:57:28 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.516 02:57:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.516 02:57:28 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.516 02:57:28 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.516 02:57:28 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.516 02:57:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:49.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.097 ms 00:12:49.516 00:12:49.516 --- 10.0.0.2 ping statistics --- 00:12:49.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.516 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:12:49.516 02:57:28 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:49.516 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.516 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:12:49.516 00:12:49.516 --- 10.0.0.3 ping statistics --- 00:12:49.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.516 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:12:49.516 02:57:28 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.517 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.517 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:12:49.517 00:12:49.517 --- 10.0.0.1 ping statistics --- 00:12:49.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.517 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:12:49.517 02:57:28 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.517 02:57:28 -- nvmf/common.sh@422 -- # return 0 00:12:49.517 02:57:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:49.517 02:57:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.517 02:57:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:49.517 02:57:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:49.517 02:57:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.517 02:57:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:49.517 02:57:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:49.517 02:57:28 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=84776 00:12:49.517 02:57:28 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.517 02:57:28 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 84776 00:12:49.517 02:57:28 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:49.517 02:57:28 -- common/autotest_common.sh@817 -- # '[' -z 84776 ']' 00:12:49.517 02:57:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.517 02:57:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:49.517 02:57:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
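The veth fixture those pings just validated is built by nvmf_veth_init, traced above; stripped of the second target pair (nvmf_tgt_if2/nvmf_tgt_br2) that the script also creates, it reduces to:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up

  # Bridge the host-side peers together and open TCP/4420 for NVMe/TCP.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator-to-target reachability, as checked above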
00:12:49.517 02:57:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:49.517 02:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 02:57:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:49.775 02:57:28 -- common/autotest_common.sh@850 -- # return 0 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:49.775 02:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.775 02:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 02:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:49.775 02:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.775 02:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 Malloc0 00:12:49.775 02:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:49.775 02:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.775 02:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 02:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:49.775 02:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.775 02:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 02:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.775 02:57:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:49.775 02:57:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.775 02:57:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:49.775 02:57:28 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:50.034 Shutting down the fuzz application 00:12:50.034 02:57:29 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:50.292 Shutting down the fuzz application 00:12:50.292 02:57:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.292 02:57:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:50.292 02:57:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.292 02:57:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:50.292 02:57:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:50.292 02:57:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:50.292 02:57:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:50.292 02:57:29 -- nvmf/common.sh@117 -- # sync 00:12:50.550 02:57:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.550 02:57:29 -- nvmf/common.sh@120 -- # set +e 00:12:50.550 02:57:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.550 
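The target setup and both fuzzer invocations traced above condense to the following (rpc_cmd wraps rpc.py against the netns-scoped target; the 30-second bound and 123456 seed are the ones logged):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create -b Malloc0 64 512
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
  # Time-bounded random command fuzzing, seeded for reproducibility:
  nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$trid" -N -a
  # Replay of the canned JSON command corpus against the same subsystem:
  nvme_fuzz -m 0x2 -F "$trid" -j example.json -a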
02:57:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.550 rmmod nvme_tcp 00:12:50.550 rmmod nvme_fabrics 00:12:50.550 rmmod nvme_keyring 00:12:50.550 02:57:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.550 02:57:29 -- nvmf/common.sh@124 -- # set -e 00:12:50.550 02:57:29 -- nvmf/common.sh@125 -- # return 0 00:12:50.550 02:57:29 -- nvmf/common.sh@478 -- # '[' -n 84776 ']' 00:12:50.550 02:57:29 -- nvmf/common.sh@479 -- # killprocess 84776 00:12:50.550 02:57:29 -- common/autotest_common.sh@936 -- # '[' -z 84776 ']' 00:12:50.550 02:57:29 -- common/autotest_common.sh@940 -- # kill -0 84776 00:12:50.550 02:57:29 -- common/autotest_common.sh@941 -- # uname 00:12:50.550 02:57:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.550 02:57:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84776 00:12:50.550 killing process with pid 84776 00:12:50.550 02:57:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:50.550 02:57:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.550 02:57:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84776' 00:12:50.550 02:57:29 -- common/autotest_common.sh@955 -- # kill 84776 00:12:50.550 02:57:29 -- common/autotest_common.sh@960 -- # wait 84776 00:12:50.807 02:57:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:50.807 02:57:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:50.807 02:57:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:50.807 02:57:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.807 02:57:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.807 02:57:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.807 02:57:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.807 02:57:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:50.807 02:57:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:12:50.807 02:57:29 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:50.807 00:12:50.807 real 0m1.751s 00:12:50.807 user 0m1.594s 00:12:50.807 sys 0m0.533s 00:12:50.807 ************************************ 00:12:50.807 END TEST nvmf_fuzz 00:12:50.807 ************************************ 00:12:50.807 02:57:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:50.807 02:57:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.807 02:57:29 -- nvmf/nvmf.sh@66 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:50.807 02:57:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:50.807 02:57:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:50.807 02:57:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.807 ************************************ 00:12:50.807 START TEST nvmf_multiconnection 00:12:50.807 ************************************ 00:12:50.807 02:57:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:50.807 * Looking for test storage... 
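The teardown traced just above (and earlier at the end of the fips test) follows one pattern from nvmf/common.sh; a rough sketch, with the parts the trace does not show marked as assumptions:

  sync
  set +e
  for i in {1..20}; do
      # assumption: the exact retry/exit logic is not fully visible in the xtrace
      modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
  done
  set -e
  killprocess "$nvmfpid"        # kill $pid, then wait $pid, as traced above
  _remove_spdk_ns               # netns removal; its own trace is redirected away (14> /dev/null) above
  ip -4 addr flush nvmf_init_if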
00:12:50.807 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:50.807 02:57:29 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:50.807 02:57:29 -- nvmf/common.sh@7 -- # uname -s 00:12:50.807 02:57:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:50.807 02:57:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:50.807 02:57:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:50.807 02:57:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:50.807 02:57:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:50.807 02:57:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:50.807 02:57:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:50.807 02:57:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:50.807 02:57:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:50.807 02:57:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:51.066 02:57:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:12:51.066 02:57:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:12:51.066 02:57:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:51.066 02:57:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:51.066 02:57:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:51.066 02:57:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:51.066 02:57:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:51.066 02:57:29 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:51.066 02:57:29 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:51.066 02:57:29 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:51.066 02:57:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.066 02:57:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.066 02:57:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.066 02:57:29 -- paths/export.sh@5 -- # export PATH 00:12:51.066 02:57:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:51.066 02:57:29 -- nvmf/common.sh@47 -- # : 0 00:12:51.066 02:57:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:51.066 02:57:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:51.066 02:57:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:51.066 02:57:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:51.066 02:57:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:51.066 02:57:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:51.066 02:57:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:51.066 02:57:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:51.066 02:57:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:51.066 02:57:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:51.066 02:57:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:51.066 02:57:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:51.066 02:57:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:51.066 02:57:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:51.066 02:57:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:51.066 02:57:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:51.066 02:57:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:51.066 02:57:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:51.066 02:57:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:51.066 02:57:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:51.066 02:57:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:12:51.066 02:57:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:12:51.066 02:57:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:12:51.066 02:57:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:12:51.066 02:57:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:12:51.066 02:57:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:12:51.066 02:57:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.066 02:57:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.066 02:57:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:51.066 02:57:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:12:51.066 02:57:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:51.066 02:57:29 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:51.066 02:57:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:51.066 02:57:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.066 02:57:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:51.066 02:57:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:51.066 02:57:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:51.066 02:57:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:51.066 02:57:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:12:51.066 02:57:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:12:51.066 Cannot find device "nvmf_tgt_br" 00:12:51.066 02:57:30 -- nvmf/common.sh@155 -- # true 00:12:51.066 02:57:30 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:12:51.066 Cannot find device "nvmf_tgt_br2" 00:12:51.066 02:57:30 -- nvmf/common.sh@156 -- # true 00:12:51.066 02:57:30 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:12:51.066 02:57:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:12:51.066 Cannot find device "nvmf_tgt_br" 00:12:51.066 02:57:30 -- nvmf/common.sh@158 -- # true 00:12:51.066 02:57:30 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:12:51.066 Cannot find device "nvmf_tgt_br2" 00:12:51.066 02:57:30 -- nvmf/common.sh@159 -- # true 00:12:51.066 02:57:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:12:51.066 02:57:30 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:12:51.066 02:57:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:51.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.066 02:57:30 -- nvmf/common.sh@162 -- # true 00:12:51.066 02:57:30 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:51.066 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:51.066 02:57:30 -- nvmf/common.sh@163 -- # true 00:12:51.066 02:57:30 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:12:51.066 02:57:30 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:51.066 02:57:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:51.066 02:57:30 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:51.066 02:57:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:51.066 02:57:30 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:51.324 02:57:30 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:51.324 02:57:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:51.324 02:57:30 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:51.324 02:57:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:12:51.324 02:57:30 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:12:51.324 02:57:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:12:51.324 02:57:30 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:12:51.324 02:57:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:51.324 02:57:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:12:51.324 02:57:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:51.324 02:57:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:12:51.324 02:57:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:12:51.324 02:57:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:12:51.324 02:57:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:51.324 02:57:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:51.324 02:57:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:51.324 02:57:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:51.325 02:57:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:12:51.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:12:51.325 00:12:51.325 --- 10.0.0.2 ping statistics --- 00:12:51.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.325 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:12:51.325 02:57:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:12:51.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:51.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:12:51.325 00:12:51.325 --- 10.0.0.3 ping statistics --- 00:12:51.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.325 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:12:51.325 02:57:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:51.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:51.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:51.325 00:12:51.325 --- 10.0.0.1 ping statistics --- 00:12:51.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.325 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:51.325 02:57:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.325 02:57:30 -- nvmf/common.sh@422 -- # return 0 00:12:51.325 02:57:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:51.325 02:57:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.325 02:57:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:51.325 02:57:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:51.325 02:57:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.325 02:57:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:51.325 02:57:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:51.325 02:57:30 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:51.325 02:57:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:51.325 02:57:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:51.325 02:57:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.325 02:57:30 -- nvmf/common.sh@470 -- # nvmfpid=84961 00:12:51.325 02:57:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.325 02:57:30 -- nvmf/common.sh@471 -- # waitforlisten 84961 00:12:51.325 02:57:30 -- common/autotest_common.sh@817 -- # '[' -z 84961 ']' 00:12:51.325 02:57:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.325 02:57:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:51.325 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:12:51.325 02:57:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.325 02:57:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:51.325 02:57:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.325 [2024-04-23 02:57:30.445550] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:12:51.325 [2024-04-23 02:57:30.445648] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.584 [2024-04-23 02:57:30.569210] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:12:51.584 [2024-04-23 02:57:30.585750] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.584 [2024-04-23 02:57:30.628349] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.584 [2024-04-23 02:57:30.628403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.584 [2024-04-23 02:57:30.628414] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.584 [2024-04-23 02:57:30.628423] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.584 [2024-04-23 02:57:30.628433] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.584 [2024-04-23 02:57:30.628577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.584 [2024-04-23 02:57:30.628692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.584 [2024-04-23 02:57:30.628775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:51.584 [2024-04-23 02:57:30.628779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.521 02:57:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:52.521 02:57:31 -- common/autotest_common.sh@850 -- # return 0 00:12:52.521 02:57:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:52.521 02:57:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 02:57:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.521 02:57:31 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 [2024-04-23 02:57:31.407575] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:52.521 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.521 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 Malloc1 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- 
target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 [2024-04-23 02:57:31.476367] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.521 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 Malloc2 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.521 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.521 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:52.521 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.521 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.521 Malloc3 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:52.522 02:57:31 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.522 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 Malloc4 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.522 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 Malloc5 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.522 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 Malloc6 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.522 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.522 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.522 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:52.522 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.522 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.793 Malloc7 00:12:52.793 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.793 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:52.793 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.793 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.793 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.793 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:52.793 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.793 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.793 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.793 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:52.793 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.793 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.793 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.793 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.793 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 Malloc8 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.794 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 Malloc9 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.794 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 Malloc10 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 
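The repeated create/add_ns/add_listener triplets above are one loop over NVMF_SUBSYS=11 in multiconnection.sh (Malloc11/cnode11 follows below); condensed:

  for i in $(seq 1 11); do
      rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
      rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done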
00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.794 02:57:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 Malloc11 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:52.794 02:57:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.794 02:57:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.794 02:57:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.794 02:57:31 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:52.794 02:57:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:52.794 02:57:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:53.102 02:57:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:53.102 02:57:32 -- common/autotest_common.sh@1184 -- # local i=0 00:12:53.102 02:57:32 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.102 02:57:32 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:53.102 02:57:32 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:55.011 02:57:34 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:55.011 02:57:34 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:55.011 02:57:34 -- common/autotest_common.sh@1193 -- # grep -c SPDK1 00:12:55.011 02:57:34 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:55.011 02:57:34 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.011 02:57:34 -- common/autotest_common.sh@1194 -- # return 0 00:12:55.011 02:57:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:55.011 02:57:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:55.011 02:57:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:55.011 02:57:34 -- common/autotest_common.sh@1184 -- # local i=0 00:12:55.011 02:57:34 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:12:55.011 02:57:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:55.011 02:57:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:57.540 02:57:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:57.540 02:57:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:57.540 02:57:36 -- common/autotest_common.sh@1193 -- # grep -c SPDK2 00:12:57.540 02:57:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:57.540 02:57:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.540 02:57:36 -- common/autotest_common.sh@1194 -- # return 0 00:12:57.540 02:57:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:57.540 02:57:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:57.540 02:57:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:57.540 02:57:36 -- common/autotest_common.sh@1184 -- # local i=0 00:12:57.540 02:57:36 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:57.540 02:57:36 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:57.540 02:57:36 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:59.442 02:57:38 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:59.442 02:57:38 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:59.442 02:57:38 -- common/autotest_common.sh@1193 -- # grep -c SPDK3 00:12:59.442 02:57:38 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:59.442 02:57:38 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.442 02:57:38 -- common/autotest_common.sh@1194 -- # return 0 00:12:59.442 02:57:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:59.442 02:57:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:59.442 02:57:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:59.442 02:57:38 -- common/autotest_common.sh@1184 -- # local i=0 00:12:59.442 02:57:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.442 02:57:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:59.442 02:57:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:01.346 02:57:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:01.346 02:57:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:01.346 02:57:40 -- common/autotest_common.sh@1193 -- # grep -c SPDK4 00:13:01.346 02:57:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:01.346 02:57:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.346 02:57:40 -- common/autotest_common.sh@1194 -- # return 0 00:13:01.346 02:57:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:01.346 02:57:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:13:01.605 02:57:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:13:01.605 
02:57:40 -- common/autotest_common.sh@1184 -- # local i=0 00:13:01.605 02:57:40 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.605 02:57:40 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:01.605 02:57:40 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:03.511 02:57:42 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:03.511 02:57:42 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:03.511 02:57:42 -- common/autotest_common.sh@1193 -- # grep -c SPDK5 00:13:03.511 02:57:42 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:03.511 02:57:42 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.511 02:57:42 -- common/autotest_common.sh@1194 -- # return 0 00:13:03.511 02:57:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:03.511 02:57:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:13:03.769 02:57:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:13:03.769 02:57:42 -- common/autotest_common.sh@1184 -- # local i=0 00:13:03.769 02:57:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.769 02:57:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:03.769 02:57:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:05.669 02:57:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:05.669 02:57:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:05.669 02:57:44 -- common/autotest_common.sh@1193 -- # grep -c SPDK6 00:13:05.669 02:57:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:05.669 02:57:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.669 02:57:44 -- common/autotest_common.sh@1194 -- # return 0 00:13:05.669 02:57:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:05.669 02:57:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:13:05.928 02:57:44 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:13:05.928 02:57:44 -- common/autotest_common.sh@1184 -- # local i=0 00:13:05.928 02:57:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.928 02:57:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:05.928 02:57:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:07.829 02:57:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:07.829 02:57:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:07.829 02:57:46 -- common/autotest_common.sh@1193 -- # grep -c SPDK7 00:13:07.829 02:57:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:07.829 02:57:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:07.829 02:57:46 -- common/autotest_common.sh@1194 -- # return 0 00:13:07.829 02:57:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:07.829 02:57:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n 
nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:13:08.087 02:57:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:13:08.087 02:57:47 -- common/autotest_common.sh@1184 -- # local i=0 00:13:08.087 02:57:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:08.087 02:57:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:08.087 02:57:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:10.012 02:57:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:10.012 02:57:49 -- common/autotest_common.sh@1193 -- # grep -c SPDK8 00:13:10.012 02:57:49 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:10.012 02:57:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:10.012 02:57:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:10.012 02:57:49 -- common/autotest_common.sh@1194 -- # return 0 00:13:10.012 02:57:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:10.012 02:57:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:13:10.270 02:57:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:13:10.270 02:57:49 -- common/autotest_common.sh@1184 -- # local i=0 00:13:10.270 02:57:49 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.270 02:57:49 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:10.270 02:57:49 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:12.170 02:57:51 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:12.170 02:57:51 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:12.170 02:57:51 -- common/autotest_common.sh@1193 -- # grep -c SPDK9 00:13:12.170 02:57:51 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:12.170 02:57:51 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.170 02:57:51 -- common/autotest_common.sh@1194 -- # return 0 00:13:12.170 02:57:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:12.170 02:57:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:12.429 02:57:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:12.429 02:57:51 -- common/autotest_common.sh@1184 -- # local i=0 00:13:12.429 02:57:51 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.429 02:57:51 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:12.429 02:57:51 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:14.330 02:57:53 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:14.330 02:57:53 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:14.330 02:57:53 -- common/autotest_common.sh@1193 -- # grep -c SPDK10 00:13:14.330 02:57:53 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:14.330 02:57:53 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:14.330 02:57:53 -- common/autotest_common.sh@1194 -- # return 0 00:13:14.330 02:57:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:14.330 02:57:53 -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:13:14.589 02:57:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:13:14.589 02:57:53 -- common/autotest_common.sh@1184 -- # local i=0
00:13:14.589 02:57:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:13:14.589 02:57:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]]
00:13:14.589 02:57:53 -- common/autotest_common.sh@1191 -- # sleep 2
00:13:16.489 02:57:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:13:16.489 02:57:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:13:16.489 02:57:55 -- common/autotest_common.sh@1193 -- # grep -c SPDK11
00:13:16.489 02:57:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:13:16.489 02:57:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:13:16.489 02:57:55 -- common/autotest_common.sh@1194 -- # return 0
00:13:16.489 02:57:55 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:13:16.748 [global]
00:13:16.748 thread=1
00:13:16.748 invalidate=1
00:13:16.748 rw=read
00:13:16.748 time_based=1
00:13:16.748 runtime=10
00:13:16.748 ioengine=libaio
00:13:16.748 direct=1
00:13:16.748 bs=262144
00:13:16.748 iodepth=64
00:13:16.748 norandommap=1
00:13:16.748 numjobs=1
00:13:16.748
00:13:16.748 [job0]
00:13:16.748 filename=/dev/nvme0n1
00:13:16.748 [job1]
00:13:16.748 filename=/dev/nvme10n1
00:13:16.748 [job2]
00:13:16.748 filename=/dev/nvme1n1
00:13:16.748 [job3]
00:13:16.748 filename=/dev/nvme2n1
00:13:16.748 [job4]
00:13:16.748 filename=/dev/nvme3n1
00:13:16.748 [job5]
00:13:16.749 filename=/dev/nvme4n1
00:13:16.749 [job6]
00:13:16.749 filename=/dev/nvme5n1
00:13:16.749 [job7]
00:13:16.749 filename=/dev/nvme6n1
00:13:16.749 [job8]
00:13:16.749 filename=/dev/nvme7n1
00:13:16.749 [job9]
00:13:16.749 filename=/dev/nvme8n1
00:13:16.749 [job10]
00:13:16.749 filename=/dev/nvme9n1
00:13:16.749 Could not set queue depth (nvme0n1)
00:13:16.749 Could not set queue depth (nvme10n1)
00:13:16.749 Could not set queue depth (nvme1n1)
00:13:16.749 Could not set queue depth (nvme2n1)
00:13:16.749 Could not set queue depth (nvme3n1)
00:13:16.749 Could not set queue depth (nvme4n1)
00:13:16.749 Could not set queue depth (nvme5n1)
00:13:16.749 Could not set queue depth (nvme6n1)
00:13:16.749 Could not set queue depth (nvme7n1)
00:13:16.749 Could not set queue depth (nvme8n1)
00:13:16.749 Could not set queue depth (nvme9n1)
00:13:17.008 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:13:17.008 job7: (g=0): 
rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.008 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.008 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.008 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.008 fio-3.35 00:13:17.008 Starting 11 threads 00:13:29.230 00:13:29.230 job0: (groupid=0, jobs=1): err= 0: pid=85414: Tue Apr 23 02:58:06 2024 00:13:29.230 read: IOPS=1177, BW=294MiB/s (309MB/s)(2950MiB/10025msec) 00:13:29.230 slat (usec): min=17, max=25681, avg=842.74, stdev=1932.02 00:13:29.230 clat (msec): min=9, max=110, avg=53.46, stdev=15.96 00:13:29.230 lat (msec): min=10, max=114, avg=54.30, stdev=16.18 00:13:29.230 clat percentiles (msec): 00:13:29.230 | 1.00th=[ 31], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:13:29.230 | 30.00th=[ 36], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 63], 00:13:29.230 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 69], 95.00th=[ 72], 00:13:29.230 | 99.00th=[ 90], 99.50th=[ 99], 99.90th=[ 105], 99.95th=[ 108], 00:13:29.230 | 99.99th=[ 111] 00:13:29.230 bw ( KiB/s): min=201728, max=485376, per=15.19%, avg=300467.70, stdev=94730.47, samples=20 00:13:29.230 iops : min= 788, max= 1896, avg=1173.70, stdev=370.04, samples=20 00:13:29.230 lat (msec) : 10=0.01%, 20=0.05%, 50=37.28%, 100=62.29%, 250=0.36% 00:13:29.230 cpu : usr=0.56%, sys=4.31%, ctx=2471, majf=0, minf=4097 00:13:29.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:29.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.230 issued rwts: total=11801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.230 job1: (groupid=0, jobs=1): err= 0: pid=85415: Tue Apr 23 02:58:06 2024 00:13:29.230 read: IOPS=1764, BW=441MiB/s (463MB/s)(4417MiB/10012msec) 00:13:29.230 slat (usec): min=21, max=30176, avg=562.10, stdev=1159.07 00:13:29.230 clat (usec): min=8567, max=61641, avg=35663.78, stdev=2454.02 00:13:29.230 lat (usec): min=12154, max=61760, avg=36225.88, stdev=2458.43 00:13:29.230 clat percentiles (usec): 00:13:29.230 | 1.00th=[30540], 5.00th=[32375], 10.00th=[33162], 20.00th=[33817], 00:13:29.230 | 30.00th=[34866], 40.00th=[34866], 50.00th=[35390], 60.00th=[35914], 00:13:29.230 | 70.00th=[36439], 80.00th=[37487], 90.00th=[38011], 95.00th=[39060], 00:13:29.230 | 99.00th=[41681], 99.50th=[44827], 99.90th=[56361], 99.95th=[58459], 00:13:29.230 | 99.99th=[61604] 00:13:29.230 bw ( KiB/s): min=407040, max=470016, per=22.78%, avg=450590.80, stdev=14517.84, samples=20 00:13:29.230 iops : min= 1590, max= 1836, avg=1760.20, stdev=56.71, samples=20 00:13:29.230 lat (msec) : 10=0.01%, 20=0.13%, 50=99.70%, 100=0.16% 00:13:29.230 cpu : usr=0.75%, sys=5.59%, ctx=3716, majf=0, minf=4097 00:13:29.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:13:29.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.230 issued rwts: total=17667,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.230 job2: (groupid=0, jobs=1): err= 0: pid=85420: Tue 
Apr 23 02:58:06 2024 00:13:29.230 read: IOPS=414, BW=104MiB/s (109MB/s)(1047MiB/10105msec) 00:13:29.230 slat (usec): min=19, max=82386, avg=2384.01, stdev=5538.35 00:13:29.230 clat (msec): min=73, max=254, avg=151.90, stdev=11.85 00:13:29.230 lat (msec): min=73, max=254, avg=154.29, stdev=12.45 00:13:29.230 clat percentiles (msec): 00:13:29.230 | 1.00th=[ 112], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 146], 00:13:29.230 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:13:29.230 | 70.00th=[ 155], 80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 167], 00:13:29.230 | 99.00th=[ 182], 99.50th=[ 201], 99.90th=[ 249], 99.95th=[ 249], 00:13:29.230 | 99.99th=[ 255] 00:13:29.230 bw ( KiB/s): min=99840, max=110080, per=5.34%, avg=105559.40, stdev=2906.24, samples=20 00:13:29.230 iops : min= 390, max= 430, avg=412.20, stdev=11.38, samples=20 00:13:29.230 lat (msec) : 100=0.50%, 250=99.47%, 500=0.02% 00:13:29.230 cpu : usr=0.23%, sys=1.40%, ctx=1061, majf=0, minf=4097 00:13:29.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:29.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.230 issued rwts: total=4186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.230 job3: (groupid=0, jobs=1): err= 0: pid=85421: Tue Apr 23 02:58:06 2024 00:13:29.230 read: IOPS=411, BW=103MiB/s (108MB/s)(1040MiB/10109msec) 00:13:29.230 slat (usec): min=17, max=85294, avg=2400.10, stdev=6061.52 00:13:29.230 clat (msec): min=99, max=268, avg=152.86, stdev=11.46 00:13:29.230 lat (msec): min=99, max=268, avg=155.26, stdev=12.26 00:13:29.230 clat percentiles (msec): 00:13:29.230 | 1.00th=[ 130], 5.00th=[ 140], 10.00th=[ 144], 20.00th=[ 146], 00:13:29.230 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:13:29.230 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 167], 00:13:29.230 | 99.00th=[ 199], 99.50th=[ 218], 99.90th=[ 247], 99.95th=[ 247], 00:13:29.230 | 99.99th=[ 268] 00:13:29.230 bw ( KiB/s): min=96448, max=111616, per=5.30%, avg=104856.65, stdev=4208.84, samples=20 00:13:29.230 iops : min= 376, max= 436, avg=409.55, stdev=16.52, samples=20 00:13:29.230 lat (msec) : 100=0.05%, 250=99.93%, 500=0.02% 00:13:29.230 cpu : usr=0.22%, sys=1.58%, ctx=1051, majf=0, minf=4097 00:13:29.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:29.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.230 issued rwts: total=4159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.230 job4: (groupid=0, jobs=1): err= 0: pid=85422: Tue Apr 23 02:58:06 2024 00:13:29.230 read: IOPS=414, BW=104MiB/s (109MB/s)(1047MiB/10113msec) 00:13:29.230 slat (usec): min=22, max=119377, avg=2382.22, stdev=5950.15 00:13:29.230 clat (msec): min=10, max=265, avg=151.96, stdev=15.20 00:13:29.230 lat (msec): min=14, max=265, avg=154.34, stdev=15.82 00:13:29.230 clat percentiles (msec): 00:13:29.230 | 1.00th=[ 81], 5.00th=[ 138], 10.00th=[ 142], 20.00th=[ 146], 00:13:29.230 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:13:29.230 | 70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 163], 95.00th=[ 167], 00:13:29.230 | 99.00th=[ 192], 99.50th=[ 215], 99.90th=[ 259], 99.95th=[ 259], 
00:13:29.230 | 99.99th=[ 266] 00:13:29.230 bw ( KiB/s): min=98304, max=111616, per=5.34%, avg=105589.20, stdev=3488.23, samples=20 00:13:29.230 iops : min= 384, max= 436, avg=412.45, stdev=13.62, samples=20 00:13:29.230 lat (msec) : 20=0.14%, 100=1.50%, 250=98.21%, 500=0.14% 00:13:29.230 cpu : usr=0.25%, sys=1.80%, ctx=985, majf=0, minf=4097 00:13:29.230 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:29.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.230 issued rwts: total=4188,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.230 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.230 job5: (groupid=0, jobs=1): err= 0: pid=85423: Tue Apr 23 02:58:06 2024 00:13:29.230 read: IOPS=873, BW=218MiB/s (229MB/s)(2188MiB/10018msec) 00:13:29.230 slat (usec): min=18, max=37685, avg=1127.74, stdev=2641.53 00:13:29.231 clat (msec): min=4, max=146, avg=72.01, stdev=20.72 00:13:29.231 lat (msec): min=4, max=148, avg=73.14, stdev=21.01 00:13:29.231 clat percentiles (msec): 00:13:29.231 | 1.00th=[ 33], 5.00th=[ 57], 10.00th=[ 59], 20.00th=[ 62], 00:13:29.231 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 66], 60.00th=[ 67], 00:13:29.231 | 70.00th=[ 69], 80.00th=[ 72], 90.00th=[ 113], 95.00th=[ 123], 00:13:29.231 | 99.00th=[ 133], 99.50th=[ 138], 99.90th=[ 142], 99.95th=[ 142], 00:13:29.231 | 99.99th=[ 146] 00:13:29.231 bw ( KiB/s): min=133120, max=259584, per=11.24%, avg=222413.50, stdev=47404.94, samples=20 00:13:29.231 iops : min= 520, max= 1014, avg=868.75, stdev=185.14, samples=20 00:13:29.231 lat (msec) : 10=0.21%, 20=0.26%, 50=1.29%, 100=83.80%, 250=14.44% 00:13:29.231 cpu : usr=0.32%, sys=3.04%, ctx=1967, majf=0, minf=4097 00:13:29.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:13:29.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.231 issued rwts: total=8752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.231 job6: (groupid=0, jobs=1): err= 0: pid=85424: Tue Apr 23 02:58:06 2024 00:13:29.231 read: IOPS=638, BW=160MiB/s (167MB/s)(1607MiB/10074msec) 00:13:29.231 slat (usec): min=18, max=33565, avg=1545.71, stdev=3313.73 00:13:29.231 clat (msec): min=20, max=169, avg=98.54, stdev=14.10 00:13:29.231 lat (msec): min=20, max=169, avg=100.09, stdev=14.28 00:13:29.231 clat percentiles (msec): 00:13:29.231 | 1.00th=[ 79], 5.00th=[ 87], 10.00th=[ 88], 20.00th=[ 90], 00:13:29.231 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 97], 00:13:29.231 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 118], 95.00th=[ 129], 00:13:29.231 | 99.00th=[ 140], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 165], 00:13:29.231 | 99.99th=[ 169] 00:13:29.231 bw ( KiB/s): min=129536, max=178176, per=8.24%, avg=162927.60, stdev=16178.88, samples=20 00:13:29.231 iops : min= 506, max= 696, avg=636.40, stdev=63.20, samples=20 00:13:29.231 lat (msec) : 50=0.78%, 100=69.63%, 250=29.59% 00:13:29.231 cpu : usr=0.29%, sys=2.54%, ctx=1539, majf=0, minf=4097 00:13:29.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:29.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.231 issued rwts: total=6428,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:13:29.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.231 job7: (groupid=0, jobs=1): err= 0: pid=85425: Tue Apr 23 02:58:06 2024 00:13:29.231 read: IOPS=405, BW=101MiB/s (106MB/s)(1024MiB/10112msec) 00:13:29.231 slat (usec): min=17, max=129528, avg=2437.06, stdev=6218.14 00:13:29.231 clat (msec): min=107, max=253, avg=155.20, stdev=11.67 00:13:29.231 lat (msec): min=123, max=265, avg=157.64, stdev=12.17 00:13:29.231 clat percentiles (msec): 00:13:29.231 | 1.00th=[ 136], 5.00th=[ 142], 10.00th=[ 146], 20.00th=[ 148], 00:13:29.231 | 30.00th=[ 150], 40.00th=[ 153], 50.00th=[ 153], 60.00th=[ 155], 00:13:29.231 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 169], 95.00th=[ 176], 00:13:29.231 | 99.00th=[ 190], 99.50th=[ 194], 99.90th=[ 245], 99.95th=[ 253], 00:13:29.231 | 99.99th=[ 253] 00:13:29.231 bw ( KiB/s): min=75927, max=111616, per=5.22%, avg=103267.25, stdev=7602.07, samples=20 00:13:29.231 iops : min= 296, max= 436, avg=403.35, stdev=29.80, samples=20 00:13:29.231 lat (msec) : 250=99.93%, 500=0.07% 00:13:29.231 cpu : usr=0.27%, sys=1.40%, ctx=1024, majf=0, minf=4097 00:13:29.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:29.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.231 issued rwts: total=4097,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.231 job8: (groupid=0, jobs=1): err= 0: pid=85426: Tue Apr 23 02:58:06 2024 00:13:29.231 read: IOPS=631, BW=158MiB/s (166MB/s)(1590MiB/10073msec) 00:13:29.231 slat (usec): min=17, max=38155, avg=1566.55, stdev=3371.98 00:13:29.231 clat (msec): min=25, max=162, avg=99.58, stdev=13.95 00:13:29.231 lat (msec): min=27, max=172, avg=101.14, stdev=14.12 00:13:29.231 clat percentiles (msec): 00:13:29.231 | 1.00th=[ 84], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 91], 00:13:29.231 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 95], 60.00th=[ 97], 00:13:29.231 | 70.00th=[ 101], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 132], 00:13:29.231 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 163], 99.95th=[ 163], 00:13:29.231 | 99.99th=[ 163] 00:13:29.231 bw ( KiB/s): min=118509, max=175616, per=8.15%, avg=161198.30, stdev=18226.38, samples=20 00:13:29.231 iops : min= 462, max= 686, avg=629.60, stdev=71.30, samples=20 00:13:29.231 lat (msec) : 50=0.42%, 100=69.25%, 250=30.33% 00:13:29.231 cpu : usr=0.43%, sys=2.83%, ctx=1532, majf=0, minf=4097 00:13:29.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:29.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.231 issued rwts: total=6360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.231 job9: (groupid=0, jobs=1): err= 0: pid=85427: Tue Apr 23 02:58:06 2024 00:13:29.231 read: IOPS=412, BW=103MiB/s (108MB/s)(1043MiB/10109msec) 00:13:29.231 slat (usec): min=17, max=102272, avg=2391.12, stdev=5578.16 00:13:29.231 clat (msec): min=42, max=261, avg=152.35, stdev=15.25 00:13:29.231 lat (msec): min=42, max=261, avg=154.74, stdev=15.67 00:13:29.231 clat percentiles (msec): 00:13:29.231 | 1.00th=[ 61], 5.00th=[ 140], 10.00th=[ 144], 20.00th=[ 146], 00:13:29.231 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 153], 60.00th=[ 155], 00:13:29.231 | 
70.00th=[ 157], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 169], 00:13:29.231 | 99.00th=[ 186], 99.50th=[ 211], 99.90th=[ 257], 99.95th=[ 257], 00:13:29.231 | 99.99th=[ 262] 00:13:29.231 bw ( KiB/s): min=97474, max=111616, per=5.32%, avg=105215.25, stdev=3144.77, samples=20 00:13:29.231 iops : min= 380, max= 436, avg=410.95, stdev=12.39, samples=20 00:13:29.231 lat (msec) : 50=0.53%, 100=0.62%, 250=98.68%, 500=0.17% 00:13:29.231 cpu : usr=0.37%, sys=1.78%, ctx=996, majf=0, minf=4097 00:13:29.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:13:29.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.231 issued rwts: total=4173,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.231 job10: (groupid=0, jobs=1): err= 0: pid=85429: Tue Apr 23 02:58:06 2024 00:13:29.231 read: IOPS=628, BW=157MiB/s (165MB/s)(1583MiB/10069msec) 00:13:29.231 slat (usec): min=17, max=37556, avg=1549.40, stdev=3396.09 00:13:29.231 clat (msec): min=29, max=165, avg=100.07, stdev=15.64 00:13:29.231 lat (msec): min=29, max=165, avg=101.62, stdev=15.80 00:13:29.231 clat percentiles (msec): 00:13:29.231 | 1.00th=[ 81], 5.00th=[ 87], 10.00th=[ 89], 20.00th=[ 91], 00:13:29.231 | 30.00th=[ 93], 40.00th=[ 94], 50.00th=[ 96], 60.00th=[ 97], 00:13:29.231 | 70.00th=[ 101], 80.00th=[ 106], 90.00th=[ 125], 95.00th=[ 136], 00:13:29.231 | 99.00th=[ 153], 99.50th=[ 157], 99.90th=[ 163], 99.95th=[ 165], 00:13:29.231 | 99.99th=[ 165] 00:13:29.231 bw ( KiB/s): min=121856, max=178020, per=8.11%, avg=160457.95, stdev=18578.53, samples=20 00:13:29.231 iops : min= 476, max= 695, avg=626.70, stdev=72.53, samples=20 00:13:29.231 lat (msec) : 50=0.52%, 100=68.84%, 250=30.64% 00:13:29.231 cpu : usr=0.35%, sys=2.24%, ctx=1554, majf=0, minf=4097 00:13:29.231 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:13:29.231 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:29.231 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:29.231 issued rwts: total=6331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:29.231 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:29.231 00:13:29.231 Run status group 0 (all jobs): 00:13:29.231 READ: bw=1932MiB/s (2026MB/s), 101MiB/s-441MiB/s (106MB/s-463MB/s), io=19.1GiB (20.5GB), run=10012-10113msec 00:13:29.231 00:13:29.231 Disk stats (read/write): 00:13:29.231 nvme0n1: ios=23495/0, merge=0/0, ticks=1239534/0, in_queue=1239534, util=97.93% 00:13:29.231 nvme10n1: ios=34374/0, merge=0/0, ticks=1212125/0, in_queue=1212125, util=97.98% 00:13:29.231 nvme1n1: ios=8254/0, merge=0/0, ticks=1226595/0, in_queue=1226595, util=98.05% 00:13:29.231 nvme2n1: ios=8197/0, merge=0/0, ticks=1223426/0, in_queue=1223426, util=98.24% 00:13:29.231 nvme3n1: ios=8263/0, merge=0/0, ticks=1226174/0, in_queue=1226174, util=98.39% 00:13:29.231 nvme4n1: ios=17398/0, merge=0/0, ticks=1237459/0, in_queue=1237459, util=98.50% 00:13:29.231 nvme5n1: ios=12739/0, merge=0/0, ticks=1230676/0, in_queue=1230676, util=98.51% 00:13:29.231 nvme6n1: ios=8074/0, merge=0/0, ticks=1225677/0, in_queue=1225677, util=98.60% 00:13:29.231 nvme7n1: ios=12605/0, merge=0/0, ticks=1230440/0, in_queue=1230440, util=98.90% 00:13:29.231 nvme8n1: ios=8235/0, merge=0/0, ticks=1226013/0, in_queue=1226013, util=99.06% 00:13:29.231 nvme9n1: ios=12544/0, merge=0/0, ticks=1232433/0, 
in_queue=1232433, util=99.07% 00:13:29.231 02:58:06 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:29.231 [global] 00:13:29.231 thread=1 00:13:29.231 invalidate=1 00:13:29.231 rw=randwrite 00:13:29.231 time_based=1 00:13:29.231 runtime=10 00:13:29.231 ioengine=libaio 00:13:29.231 direct=1 00:13:29.231 bs=262144 00:13:29.231 iodepth=64 00:13:29.231 norandommap=1 00:13:29.231 numjobs=1 00:13:29.231 00:13:29.231 [job0] 00:13:29.231 filename=/dev/nvme0n1 00:13:29.231 [job1] 00:13:29.231 filename=/dev/nvme10n1 00:13:29.231 [job2] 00:13:29.231 filename=/dev/nvme1n1 00:13:29.231 [job3] 00:13:29.231 filename=/dev/nvme2n1 00:13:29.231 [job4] 00:13:29.231 filename=/dev/nvme3n1 00:13:29.231 [job5] 00:13:29.231 filename=/dev/nvme4n1 00:13:29.231 [job6] 00:13:29.231 filename=/dev/nvme5n1 00:13:29.231 [job7] 00:13:29.231 filename=/dev/nvme6n1 00:13:29.231 [job8] 00:13:29.231 filename=/dev/nvme7n1 00:13:29.231 [job9] 00:13:29.231 filename=/dev/nvme8n1 00:13:29.231 [job10] 00:13:29.231 filename=/dev/nvme9n1 00:13:29.231 Could not set queue depth (nvme0n1) 00:13:29.231 Could not set queue depth (nvme10n1) 00:13:29.231 Could not set queue depth (nvme1n1) 00:13:29.232 Could not set queue depth (nvme2n1) 00:13:29.232 Could not set queue depth (nvme3n1) 00:13:29.232 Could not set queue depth (nvme4n1) 00:13:29.232 Could not set queue depth (nvme5n1) 00:13:29.232 Could not set queue depth (nvme6n1) 00:13:29.232 Could not set queue depth (nvme7n1) 00:13:29.232 Could not set queue depth (nvme8n1) 00:13:29.232 Could not set queue depth (nvme9n1) 00:13:29.232 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:29.232 fio-3.35 00:13:29.232 Starting 11 threads 00:13:39.212 00:13:39.212 job0: (groupid=0, jobs=1): err= 0: pid=85623: Tue Apr 23 02:58:17 2024 00:13:39.212 write: IOPS=431, BW=108MiB/s (113MB/s)(1093MiB/10131msec); 0 zone resets 00:13:39.212 slat (usec): min=16, max=37152, avg=2244.17, stdev=3972.62 00:13:39.212 clat (msec): min=18, max=254, avg=146.00, stdev=22.38 00:13:39.212 lat (msec): min=18, max=254, avg=148.24, stdev=22.45 00:13:39.212 clat percentiles (msec): 00:13:39.212 | 1.00th=[ 63], 
5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 128], 00:13:39.212 | 30.00th=[ 129], 40.00th=[ 150], 50.00th=[ 155], 60.00th=[ 161], 00:13:39.212 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:13:39.212 | 99.00th=[ 184], 99.50th=[ 213], 99.90th=[ 247], 99.95th=[ 247], 00:13:39.212 | 99.99th=[ 255] 00:13:39.212 bw ( KiB/s): min=100352, max=129536, per=7.61%, avg=110310.40, stdev=12637.25, samples=20 00:13:39.212 iops : min= 392, max= 506, avg=430.90, stdev=49.36, samples=20 00:13:39.212 lat (msec) : 20=0.09%, 50=0.64%, 100=1.51%, 250=97.71%, 500=0.05% 00:13:39.212 cpu : usr=0.81%, sys=1.00%, ctx=5387, majf=0, minf=1 00:13:39.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:39.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.212 issued rwts: total=0,4372,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.212 job1: (groupid=0, jobs=1): err= 0: pid=85624: Tue Apr 23 02:58:17 2024 00:13:39.212 write: IOPS=1111, BW=278MiB/s (291MB/s)(2794MiB/10055msec); 0 zone resets 00:13:39.212 slat (usec): min=16, max=6868, avg=890.63, stdev=1491.86 00:13:39.212 clat (msec): min=9, max=108, avg=56.68, stdev= 3.54 00:13:39.212 lat (msec): min=9, max=111, avg=57.57, stdev= 3.35 00:13:39.212 clat percentiles (msec): 00:13:39.212 | 1.00th=[ 53], 5.00th=[ 54], 10.00th=[ 54], 20.00th=[ 55], 00:13:39.212 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 57], 60.00th=[ 58], 00:13:39.212 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 59], 95.00th=[ 59], 00:13:39.212 | 99.00th=[ 60], 99.50th=[ 61], 99.90th=[ 101], 99.95th=[ 105], 00:13:39.212 | 99.99th=[ 109] 00:13:39.212 bw ( KiB/s): min=279040, max=287744, per=19.63%, avg=284467.20, stdev=2187.42, samples=20 00:13:39.212 iops : min= 1090, max= 1124, avg=1111.20, stdev= 8.54, samples=20 00:13:39.212 lat (msec) : 10=0.04%, 20=0.11%, 50=0.29%, 100=99.45%, 250=0.12% 00:13:39.212 cpu : usr=1.59%, sys=2.28%, ctx=13594, majf=0, minf=1 00:13:39.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:39.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.212 issued rwts: total=0,11175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.212 job2: (groupid=0, jobs=1): err= 0: pid=85637: Tue Apr 23 02:58:17 2024 00:13:39.212 write: IOPS=432, BW=108MiB/s (113MB/s)(1092MiB/10112msec); 0 zone resets 00:13:39.212 slat (usec): min=18, max=30263, avg=2252.44, stdev=3987.93 00:13:39.212 clat (msec): min=32, max=233, avg=145.82, stdev=20.62 00:13:39.212 lat (msec): min=32, max=233, avg=148.07, stdev=20.65 00:13:39.212 clat percentiles (msec): 00:13:39.212 | 1.00th=[ 88], 5.00th=[ 117], 10.00th=[ 122], 20.00th=[ 126], 00:13:39.212 | 30.00th=[ 128], 40.00th=[ 150], 50.00th=[ 155], 60.00th=[ 161], 00:13:39.212 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 163], 95.00th=[ 165], 00:13:39.212 | 99.00th=[ 182], 99.50th=[ 192], 99.90th=[ 226], 99.95th=[ 226], 00:13:39.212 | 99.99th=[ 234] 00:13:39.212 bw ( KiB/s): min=96256, max=131072, per=7.61%, avg=110243.85, stdev=13263.25, samples=20 00:13:39.212 iops : min= 376, max= 512, avg=430.60, stdev=51.83, samples=20 00:13:39.212 lat (msec) : 50=0.37%, 100=1.26%, 250=98.37% 00:13:39.212 cpu : usr=0.68%, sys=1.12%, ctx=4804, 
majf=0, minf=1 00:13:39.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:13:39.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.212 issued rwts: total=0,4369,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.212 job3: (groupid=0, jobs=1): err= 0: pid=85642: Tue Apr 23 02:58:17 2024 00:13:39.212 write: IOPS=425, BW=106MiB/s (111MB/s)(1076MiB/10129msec); 0 zone resets 00:13:39.212 slat (usec): min=17, max=64920, avg=2318.69, stdev=4105.28 00:13:39.212 clat (msec): min=66, max=252, avg=148.21, stdev=18.71 00:13:39.212 lat (msec): min=66, max=252, avg=150.53, stdev=18.55 00:13:39.212 clat percentiles (msec): 00:13:39.212 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:13:39.212 | 30.00th=[ 130], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:13:39.212 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 165], 00:13:39.212 | 99.00th=[ 184], 99.50th=[ 211], 99.90th=[ 245], 99.95th=[ 245], 00:13:39.212 | 99.99th=[ 253] 00:13:39.212 bw ( KiB/s): min=90112, max=129024, per=7.50%, avg=108595.20, stdev=13364.65, samples=20 00:13:39.212 iops : min= 352, max= 504, avg=424.20, stdev=52.21, samples=20 00:13:39.212 lat (msec) : 100=0.37%, 250=99.58%, 500=0.05% 00:13:39.212 cpu : usr=0.92%, sys=1.10%, ctx=4330, majf=0, minf=1 00:13:39.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:39.212 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.212 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.212 issued rwts: total=0,4305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.212 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.212 job4: (groupid=0, jobs=1): err= 0: pid=85644: Tue Apr 23 02:58:17 2024 00:13:39.212 write: IOPS=396, BW=99.2MiB/s (104MB/s)(1007MiB/10144msec); 0 zone resets 00:13:39.212 slat (usec): min=20, max=48140, avg=2436.85, stdev=4303.45 00:13:39.212 clat (msec): min=38, max=304, avg=158.75, stdev=17.09 00:13:39.212 lat (msec): min=38, max=304, avg=161.19, stdev=16.91 00:13:39.212 clat percentiles (msec): 00:13:39.212 | 1.00th=[ 75], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:13:39.212 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:39.212 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 165], 00:13:39.212 | 99.00th=[ 205], 99.50th=[ 255], 99.90th=[ 296], 99.95th=[ 296], 00:13:39.212 | 99.99th=[ 305] 00:13:39.212 bw ( KiB/s): min=98816, max=114688, per=7.00%, avg=101452.80, stdev=3290.06, samples=20 00:13:39.212 iops : min= 386, max= 448, avg=396.30, stdev=12.85, samples=20 00:13:39.212 lat (msec) : 50=0.22%, 100=1.49%, 250=97.74%, 500=0.55% 00:13:39.212 cpu : usr=0.61%, sys=0.98%, ctx=5669, majf=0, minf=1 00:13:39.212 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:39.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.213 issued rwts: total=0,4026,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.213 job5: (groupid=0, jobs=1): err= 0: pid=85645: Tue Apr 23 02:58:17 2024 00:13:39.213 write: IOPS=394, BW=98.6MiB/s (103MB/s)(1001MiB/10147msec); 0 zone resets 00:13:39.213 slat (usec): 
min=16, max=17316, avg=2494.11, stdev=4303.40 00:13:39.213 clat (msec): min=22, max=308, avg=159.71, stdev=16.61 00:13:39.213 lat (msec): min=22, max=308, avg=162.21, stdev=16.29 00:13:39.213 clat percentiles (msec): 00:13:39.213 | 1.00th=[ 94], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:13:39.213 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:39.213 | 70.00th=[ 163], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 165], 00:13:39.213 | 99.00th=[ 207], 99.50th=[ 259], 99.90th=[ 300], 99.95th=[ 300], 00:13:39.213 | 99.99th=[ 309] 00:13:39.213 bw ( KiB/s): min=96256, max=104448, per=6.96%, avg=100838.40, stdev=1665.09, samples=20 00:13:39.213 iops : min= 376, max= 408, avg=393.90, stdev= 6.50, samples=20 00:13:39.213 lat (msec) : 50=0.50%, 100=0.60%, 250=98.35%, 500=0.55% 00:13:39.213 cpu : usr=0.66%, sys=1.05%, ctx=3764, majf=0, minf=1 00:13:39.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:39.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.213 issued rwts: total=0,4002,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.213 job6: (groupid=0, jobs=1): err= 0: pid=85646: Tue Apr 23 02:58:17 2024 00:13:39.213 write: IOPS=393, BW=98.4MiB/s (103MB/s)(998MiB/10147msec); 0 zone resets 00:13:39.213 slat (usec): min=18, max=30939, avg=2499.82, stdev=4321.95 00:13:39.213 clat (msec): min=22, max=308, avg=160.11, stdev=16.67 00:13:39.213 lat (msec): min=22, max=308, avg=162.61, stdev=16.34 00:13:39.213 clat percentiles (msec): 00:13:39.213 | 1.00th=[ 85], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:13:39.213 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:39.213 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 165], 95.00th=[ 167], 00:13:39.213 | 99.00th=[ 207], 99.50th=[ 259], 99.90th=[ 300], 99.95th=[ 309], 00:13:39.213 | 99.99th=[ 309] 00:13:39.213 bw ( KiB/s): min=98304, max=102400, per=6.94%, avg=100582.40, stdev=1270.26, samples=20 00:13:39.213 iops : min= 384, max= 400, avg=392.90, stdev= 4.96, samples=20 00:13:39.213 lat (msec) : 50=0.50%, 100=0.60%, 250=98.35%, 500=0.55% 00:13:39.213 cpu : usr=0.74%, sys=1.10%, ctx=4200, majf=0, minf=1 00:13:39.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:39.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.213 issued rwts: total=0,3992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.213 job7: (groupid=0, jobs=1): err= 0: pid=85647: Tue Apr 23 02:58:17 2024 00:13:39.213 write: IOPS=639, BW=160MiB/s (168MB/s)(1619MiB/10119msec); 0 zone resets 00:13:39.213 slat (usec): min=18, max=45281, avg=1539.31, stdev=2746.33 00:13:39.213 clat (msec): min=14, max=239, avg=98.43, stdev=18.11 00:13:39.213 lat (msec): min=14, max=239, avg=99.97, stdev=18.18 00:13:39.213 clat percentiles (msec): 00:13:39.213 | 1.00th=[ 70], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 88], 00:13:39.213 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:13:39.213 | 70.00th=[ 93], 80.00th=[ 120], 90.00th=[ 127], 95.00th=[ 129], 00:13:39.213 | 99.00th=[ 134], 99.50th=[ 176], 99.90th=[ 224], 99.95th=[ 232], 00:13:39.213 | 99.99th=[ 241] 00:13:39.213 bw ( KiB/s): min=126976, max=182784, per=11.33%, 
avg=164159.85, stdev=23702.44, samples=20 00:13:39.213 iops : min= 496, max= 714, avg=641.20, stdev=92.66, samples=20 00:13:39.213 lat (msec) : 20=0.12%, 50=0.37%, 100=73.50%, 250=26.00% 00:13:39.213 cpu : usr=1.02%, sys=1.94%, ctx=4456, majf=0, minf=1 00:13:39.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:39.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.213 issued rwts: total=0,6476,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.213 job8: (groupid=0, jobs=1): err= 0: pid=85648: Tue Apr 23 02:58:17 2024 00:13:39.213 write: IOPS=390, BW=97.7MiB/s (102MB/s)(991MiB/10145msec); 0 zone resets 00:13:39.213 slat (usec): min=18, max=80409, avg=2517.23, stdev=4475.63 00:13:39.213 clat (msec): min=86, max=308, avg=161.17, stdev=12.75 00:13:39.213 lat (msec): min=86, max=308, avg=163.69, stdev=12.09 00:13:39.213 clat percentiles (msec): 00:13:39.213 | 1.00th=[ 148], 5.00th=[ 150], 10.00th=[ 153], 20.00th=[ 155], 00:13:39.213 | 30.00th=[ 161], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 163], 00:13:39.213 | 70.00th=[ 163], 80.00th=[ 165], 90.00th=[ 165], 95.00th=[ 167], 00:13:39.213 | 99.00th=[ 220], 99.50th=[ 259], 99.90th=[ 300], 99.95th=[ 309], 00:13:39.213 | 99.99th=[ 309] 00:13:39.213 bw ( KiB/s): min=84136, max=102400, per=6.89%, avg=99874.00, stdev=3893.39, samples=20 00:13:39.213 iops : min= 328, max= 400, avg=390.10, stdev=15.35, samples=20 00:13:39.213 lat (msec) : 100=0.20%, 250=99.24%, 500=0.55% 00:13:39.213 cpu : usr=0.57%, sys=0.94%, ctx=5475, majf=0, minf=1 00:13:39.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:39.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.213 issued rwts: total=0,3964,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.213 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.213 job9: (groupid=0, jobs=1): err= 0: pid=85649: Tue Apr 23 02:58:17 2024 00:13:39.213 write: IOPS=638, BW=160MiB/s (167MB/s)(1614MiB/10117msec); 0 zone resets 00:13:39.213 slat (usec): min=18, max=57238, avg=1543.65, stdev=2791.82 00:13:39.213 clat (msec): min=13, max=244, avg=98.73, stdev=17.90 00:13:39.213 lat (msec): min=13, max=244, avg=100.27, stdev=17.96 00:13:39.213 clat percentiles (msec): 00:13:39.213 | 1.00th=[ 80], 5.00th=[ 85], 10.00th=[ 86], 20.00th=[ 89], 00:13:39.213 | 30.00th=[ 90], 40.00th=[ 91], 50.00th=[ 92], 60.00th=[ 92], 00:13:39.213 | 70.00th=[ 94], 80.00th=[ 120], 90.00th=[ 126], 95.00th=[ 127], 00:13:39.213 | 99.00th=[ 134], 99.50th=[ 180], 99.90th=[ 230], 99.95th=[ 236], 00:13:39.213 | 99.99th=[ 245] 00:13:39.213 bw ( KiB/s): min=127488, max=182272, per=11.29%, avg=163637.60, stdev=22781.70, samples=20 00:13:39.213 iops : min= 498, max= 712, avg=639.15, stdev=89.07, samples=20 00:13:39.213 lat (msec) : 20=0.12%, 50=0.37%, 100=72.35%, 250=27.16% 00:13:39.213 cpu : usr=1.21%, sys=1.92%, ctx=4370, majf=0, minf=1 00:13:39.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:13:39.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.213 issued rwts: total=0,6455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.213 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:13:39.213 job10: (groupid=0, jobs=1): err= 0: pid=85650: Tue Apr 23 02:58:17 2024 00:13:39.213 write: IOPS=423, BW=106MiB/s (111MB/s)(1073MiB/10124msec); 0 zone resets 00:13:39.213 slat (usec): min=17, max=87425, avg=2324.30, stdev=4203.35 00:13:39.213 clat (msec): min=89, max=254, avg=148.59, stdev=18.86 00:13:39.213 lat (msec): min=89, max=254, avg=150.92, stdev=18.70 00:13:39.213 clat percentiles (msec): 00:13:39.213 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 128], 00:13:39.213 | 30.00th=[ 130], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 161], 00:13:39.213 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 165], 00:13:39.213 | 99.00th=[ 203], 99.50th=[ 224], 99.90th=[ 247], 99.95th=[ 247], 00:13:39.213 | 99.99th=[ 255] 00:13:39.213 bw ( KiB/s): min=84992, max=129536, per=7.47%, avg=108262.40, stdev=13648.20, samples=20 00:13:39.213 iops : min= 332, max= 506, avg=422.90, stdev=53.31, samples=20 00:13:39.213 lat (msec) : 100=0.14%, 250=99.81%, 500=0.05% 00:13:39.213 cpu : usr=0.75%, sys=1.17%, ctx=3644, majf=0, minf=1 00:13:39.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:39.214 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:39.214 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:39.214 issued rwts: total=0,4292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:39.214 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:39.214 00:13:39.214 Run status group 0 (all jobs): 00:13:39.214 WRITE: bw=1415MiB/s (1484MB/s), 97.7MiB/s-278MiB/s (102MB/s-291MB/s), io=14.0GiB (15.1GB), run=10055-10147msec 00:13:39.214 00:13:39.214 Disk stats (read/write): 00:13:39.214 nvme0n1: ios=50/8572, merge=0/0, ticks=74/1210415, in_queue=1210489, util=97.73% 00:13:39.214 nvme10n1: ios=49/22106, merge=0/0, ticks=57/1213430, in_queue=1213487, util=97.87% 00:13:39.214 nvme1n1: ios=45/8563, merge=0/0, ticks=44/1210370, in_queue=1210414, util=97.80% 00:13:39.214 nvme2n1: ios=37/8437, merge=0/0, ticks=54/1210344, in_queue=1210398, util=98.02% 00:13:39.214 nvme3n1: ios=20/7893, merge=0/0, ticks=20/1209097, in_queue=1209117, util=97.82% 00:13:39.214 nvme4n1: ios=0/7848, merge=0/0, ticks=0/1208526, in_queue=1208526, util=98.04% 00:13:39.214 nvme5n1: ios=0/7828, merge=0/0, ticks=0/1208642, in_queue=1208642, util=98.22% 00:13:39.214 nvme6n1: ios=0/12792, merge=0/0, ticks=0/1210419, in_queue=1210419, util=98.42% 00:13:39.214 nvme7n1: ios=0/7774, merge=0/0, ticks=0/1208367, in_queue=1208367, util=98.61% 00:13:39.214 nvme8n1: ios=0/12752, merge=0/0, ticks=0/1210154, in_queue=1210154, util=98.96% 00:13:39.214 nvme9n1: ios=0/8413, merge=0/0, ticks=0/1209281, in_queue=1209281, util=98.90% 00:13:39.214 02:58:17 -- target/multiconnection.sh@36 -- # sync 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o 
NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK1 00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.214 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK2 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.214 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK3 00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.214 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK4 
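The @1205-@1217 records around this point are the body of waitforserial_disconnect, the inverse of the waitforserial helper used after each nvme connect earlier: both poll lsblk for the subsystem serial, one until it appears, the other until it is gone. A sketch consistent with the trace; the lsblk/grep probes and the return paths come from the trace, while the retry bound and sleep interval are assumptions:

    waitforserial_disconnect() {
        local serial=$1 i=0
        # poll until no block device reports this serial any more
        while lsblk -o NAME,SERIAL | grep -q -w "$serial"; do
            ((i++ < 15)) || return 1   # assumed retry limit
            sleep 1
        done
        # recheck the flat listing before declaring the device gone
        if lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
            return 1
        fi
        return 0
    }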
00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.214 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK5 00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.214 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK6 00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.214 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.214 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.214 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:39.214 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:39.214 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:39.214 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK7 00:13:39.214 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.214 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.214 02:58:17 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:39.214 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.214 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.215 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.215 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:39.215 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:39.215 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:39.215 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.215 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.215 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:13:39.215 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.215 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK8 00:13:39.215 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.215 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:39.215 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.215 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.215 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.215 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:39.215 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:39.215 02:58:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:39.215 02:58:17 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.215 02:58:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:13:39.215 02:58:17 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.215 02:58:17 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.215 02:58:17 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK9 00:13:39.215 02:58:17 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.215 02:58:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:39.215 02:58:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.215 02:58:17 -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 02:58:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.215 02:58:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.215 02:58:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:39.215 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:39.215 02:58:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:39.215 02:58:18 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.215 02:58:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.215 02:58:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:13:39.215 02:58:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.215 02:58:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK10 00:13:39.215 02:58:18 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.215 02:58:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 
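Each block of entries above is one pass of the same teardown loop; the multiconnection.sh line numbers in the xtrace (37-40) pin down its shape. Reconstructed from the trace, with rpc_cmd being the suite's wrapper around rpc.py:

    for i in $(seq 1 $NVMF_SUBSYS); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"             # line 38
        waitforserial_disconnect "SPDK${i}"                            # line 39
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # line 40
    done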
00:13:39.215 02:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.215 02:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 02:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.215 02:58:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:39.215 02:58:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:39.215 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:39.215 02:58:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:39.215 02:58:18 -- common/autotest_common.sh@1205 -- # local i=0 00:13:39.215 02:58:18 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:13:39.215 02:58:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:13:39.215 02:58:18 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:13:39.215 02:58:18 -- common/autotest_common.sh@1213 -- # grep -q -w SPDK11 00:13:39.215 02:58:18 -- common/autotest_common.sh@1217 -- # return 0 00:13:39.215 02:58:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:39.215 02:58:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:39.215 02:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:39.215 02:58:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:39.215 02:58:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:39.215 02:58:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:39.215 02:58:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:39.215 02:58:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:39.215 02:58:18 -- nvmf/common.sh@117 -- # sync 00:13:39.215 02:58:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:39.215 02:58:18 -- nvmf/common.sh@120 -- # set +e 00:13:39.215 02:58:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:39.215 02:58:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:39.215 rmmod nvme_tcp 00:13:39.215 rmmod nvme_fabrics 00:13:39.215 rmmod nvme_keyring 00:13:39.215 02:58:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:39.215 02:58:18 -- nvmf/common.sh@124 -- # set -e 00:13:39.215 02:58:18 -- nvmf/common.sh@125 -- # return 0 00:13:39.215 02:58:18 -- nvmf/common.sh@478 -- # '[' -n 84961 ']' 00:13:39.215 02:58:18 -- nvmf/common.sh@479 -- # killprocess 84961 00:13:39.215 02:58:18 -- common/autotest_common.sh@936 -- # '[' -z 84961 ']' 00:13:39.215 02:58:18 -- common/autotest_common.sh@940 -- # kill -0 84961 00:13:39.215 02:58:18 -- common/autotest_common.sh@941 -- # uname 00:13:39.215 02:58:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:39.215 02:58:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 84961 00:13:39.215 02:58:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:39.215 02:58:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:39.215 02:58:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 84961' 00:13:39.215 killing process with pid 84961 00:13:39.215 02:58:18 -- common/autotest_common.sh@955 -- # kill 84961 00:13:39.215 02:58:18 -- common/autotest_common.sh@960 -- # wait 84961 00:13:39.485 02:58:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:39.485 02:58:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:39.485 02:58:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:39.485 02:58:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:13:39.485 02:58:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:39.485 02:58:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.485 02:58:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.485 02:58:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.485 02:58:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:13:39.485 00:13:39.485 real 0m48.664s 00:13:39.485 user 2m38.105s 00:13:39.485 sys 0m35.290s 00:13:39.485 02:58:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:39.485 02:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:39.485 ************************************ 00:13:39.485 END TEST nvmf_multiconnection 00:13:39.485 ************************************ 00:13:39.485 02:58:18 -- nvmf/nvmf.sh@67 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:39.485 02:58:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:39.485 02:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:39.485 02:58:18 -- common/autotest_common.sh@10 -- # set +x 00:13:39.757 ************************************ 00:13:39.757 START TEST nvmf_initiator_timeout 00:13:39.757 ************************************ 00:13:39.757 02:58:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:39.757 * Looking for test storage... 00:13:39.757 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:39.757 02:58:18 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:39.757 02:58:18 -- nvmf/common.sh@7 -- # uname -s 00:13:39.757 02:58:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.757 02:58:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.757 02:58:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.757 02:58:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.757 02:58:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:39.757 02:58:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:39.757 02:58:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.757 02:58:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:39.757 02:58:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.757 02:58:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:39.757 02:58:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:13:39.757 02:58:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:13:39.757 02:58:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.757 02:58:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:39.757 02:58:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:39.757 02:58:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.757 02:58:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:39.757 02:58:18 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.757 02:58:18 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.757 02:58:18 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.757 02:58:18 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.757 02:58:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.757 02:58:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.757 02:58:18 -- paths/export.sh@5 -- # export PATH 00:13:39.757 02:58:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.757 02:58:18 -- nvmf/common.sh@47 -- # : 0 00:13:39.757 02:58:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:39.757 02:58:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:39.757 02:58:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:39.757 02:58:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.757 02:58:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.757 02:58:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:39.757 02:58:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:39.757 02:58:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:39.757 02:58:18 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.757 02:58:18 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.757 02:58:18 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:39.757 02:58:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:39.757 02:58:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.757 02:58:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:39.757 02:58:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:39.757 02:58:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:39.757 02:58:18 -- nvmf/common.sh@617 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.757 02:58:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.757 02:58:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:39.757 02:58:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:13:39.757 02:58:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:13:39.757 02:58:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:13:39.757 02:58:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:13:39.757 02:58:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:13:39.757 02:58:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:13:39.757 02:58:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:39.757 02:58:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:39.757 02:58:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:39.757 02:58:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:13:39.757 02:58:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:39.757 02:58:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:39.757 02:58:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:39.757 02:58:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:39.757 02:58:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:39.757 02:58:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:39.757 02:58:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:39.757 02:58:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:39.757 02:58:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:13:39.757 02:58:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:13:39.757 Cannot find device "nvmf_tgt_br" 00:13:39.757 02:58:18 -- nvmf/common.sh@155 -- # true 00:13:39.757 02:58:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:13:39.757 Cannot find device "nvmf_tgt_br2" 00:13:39.757 02:58:18 -- nvmf/common.sh@156 -- # true 00:13:39.757 02:58:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:13:39.757 02:58:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:13:39.757 Cannot find device "nvmf_tgt_br" 00:13:39.757 02:58:18 -- nvmf/common.sh@158 -- # true 00:13:39.757 02:58:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:13:39.757 Cannot find device "nvmf_tgt_br2" 00:13:39.757 02:58:18 -- nvmf/common.sh@159 -- # true 00:13:39.757 02:58:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:13:39.757 02:58:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:13:39.758 02:58:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:39.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.758 02:58:18 -- nvmf/common.sh@162 -- # true 00:13:39.758 02:58:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:39.758 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:39.758 02:58:18 -- nvmf/common.sh@163 -- # true 00:13:39.758 02:58:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:13:39.758 02:58:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:39.758 02:58:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:40.017 02:58:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer 
name nvmf_tgt_br2 00:13:40.017 02:58:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:40.017 02:58:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:40.017 02:58:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:40.017 02:58:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:40.017 02:58:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:40.017 02:58:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:13:40.017 02:58:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:13:40.017 02:58:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:13:40.017 02:58:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:13:40.017 02:58:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:40.017 02:58:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:40.017 02:58:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:40.017 02:58:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:13:40.017 02:58:19 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:13:40.017 02:58:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:13:40.017 02:58:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:40.017 02:58:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:40.017 02:58:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:40.017 02:58:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:40.017 02:58:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:13:40.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:13:40.017 00:13:40.017 --- 10.0.0.2 ping statistics --- 00:13:40.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.017 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:13:40.017 02:58:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:13:40.017 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:40.017 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:13:40.017 00:13:40.017 --- 10.0.0.3 ping statistics --- 00:13:40.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.017 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:13:40.017 02:58:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:40.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:40.017 00:13:40.017 --- 10.0.0.1 ping statistics --- 00:13:40.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.017 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:40.017 02:58:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.017 02:58:19 -- nvmf/common.sh@422 -- # return 0 00:13:40.017 02:58:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:40.017 02:58:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.017 02:58:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:40.017 02:58:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:40.017 02:58:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.017 02:58:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:40.017 02:58:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:40.017 02:58:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:40.017 02:58:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:40.017 02:58:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:40.017 02:58:19 -- common/autotest_common.sh@10 -- # set +x 00:13:40.017 02:58:19 -- nvmf/common.sh@470 -- # nvmfpid=86021 00:13:40.017 02:58:19 -- nvmf/common.sh@471 -- # waitforlisten 86021 00:13:40.017 02:58:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:40.017 02:58:19 -- common/autotest_common.sh@817 -- # '[' -z 86021 ']' 00:13:40.017 02:58:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.017 02:58:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:40.017 02:58:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.017 02:58:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:40.017 02:58:19 -- common/autotest_common.sh@10 -- # set +x 00:13:40.017 [2024-04-23 02:58:19.149637] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:13:40.017 [2024-04-23 02:58:19.149729] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.277 [2024-04-23 02:58:19.276211] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:13:40.277 [2024-04-23 02:58:19.290690] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:40.277 [2024-04-23 02:58:19.325193] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.277 [2024-04-23 02:58:19.325483] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.277 [2024-04-23 02:58:19.325640] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:40.277 [2024-04-23 02:58:19.325692] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:40.277 [2024-04-23 02:58:19.325790] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
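The veth bring-up the trace just walked through (nvmf_veth_init) reduces to the topology below: one initiator interface on the host, two target interfaces inside a network namespace, all legs joined on a bridge so 10.0.0.1 can reach 10.0.0.2 and 10.0.0.3. Every command is taken verbatim from the lines above; the "ip link set ... up" steps and error handling are omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three single-packet pings above are the smoke test for this topology before nvmf_tgt is launched.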
00:13:40.277 [2024-04-23 02:58:19.325969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.277 [2024-04-23 02:58:19.326451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.277 [2024-04-23 02:58:19.326611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:40.277 [2024-04-23 02:58:19.326614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.214 02:58:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:41.214 02:58:20 -- common/autotest_common.sh@850 -- # return 0 00:13:41.214 02:58:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:41.214 02:58:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:41.214 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.214 02:58:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.214 02:58:20 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:41.214 02:58:20 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:41.214 02:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.214 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.214 Malloc0 00:13:41.214 02:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.214 02:58:20 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:41.214 02:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.214 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.214 Delay0 00:13:41.214 02:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.215 02:58:20 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.215 02:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.215 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.215 [2024-04-23 02:58:20.150759] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.215 02:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.215 02:58:20 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:41.215 02:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.215 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.215 02:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.215 02:58:20 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.215 02:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.215 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.215 02:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.215 02:58:20 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.215 02:58:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:41.215 02:58:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.215 [2024-04-23 02:58:20.182943] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.215 02:58:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:41.215 02:58:20 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:41.215 02:58:20 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:41.215 02:58:20 -- common/autotest_common.sh@1184 -- # local i=0 00:13:41.215 02:58:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:13:41.215 02:58:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:13:41.215 02:58:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:13:43.749 02:58:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:13:43.749 02:58:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:13:43.749 02:58:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:13:43.749 02:58:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:13:43.749 02:58:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:13:43.749 02:58:22 -- common/autotest_common.sh@1194 -- # return 0 00:13:43.749 02:58:22 -- target/initiator_timeout.sh@35 -- # fio_pid=86085 00:13:43.749 02:58:22 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:43.749 02:58:22 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:43.749 [global] 00:13:43.749 thread=1 00:13:43.749 invalidate=1 00:13:43.749 rw=write 00:13:43.749 time_based=1 00:13:43.749 runtime=60 00:13:43.749 ioengine=libaio 00:13:43.749 direct=1 00:13:43.749 bs=4096 00:13:43.749 iodepth=1 00:13:43.749 norandommap=0 00:13:43.749 numjobs=1 00:13:43.749 00:13:43.749 verify_dump=1 00:13:43.749 verify_backlog=512 00:13:43.749 verify_state_save=0 00:13:43.749 do_verify=1 00:13:43.749 verify=crc32c-intel 00:13:43.749 [job0] 00:13:43.749 filename=/dev/nvme0n1 00:13:43.749 Could not set queue depth (nvme0n1) 00:13:43.749 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:43.749 fio-3.35 00:13:43.749 Starting 1 thread 00:13:46.283 02:58:25 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:46.283 02:58:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.283 02:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:46.283 true 00:13:46.283 02:58:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.283 02:58:25 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:46.283 02:58:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.283 02:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:46.283 true 00:13:46.283 02:58:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.283 02:58:25 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:46.283 02:58:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.283 02:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:46.283 true 00:13:46.283 02:58:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.283 02:58:25 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:46.283 02:58:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:46.283 02:58:25 -- common/autotest_common.sh@10 -- # set +x 00:13:46.283 true 00:13:46.283 02:58:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:46.283 02:58:25 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:49.570 02:58:28 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:49.570 02:58:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.570 02:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:49.570 true 00:13:49.570 02:58:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.570 02:58:28 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:49.570 02:58:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.570 02:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:49.570 true 00:13:49.571 02:58:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.571 02:58:28 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:49.571 02:58:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.571 02:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 true 00:13:49.571 02:58:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.571 02:58:28 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:49.571 02:58:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:49.571 02:58:28 -- common/autotest_common.sh@10 -- # set +x 00:13:49.571 true 00:13:49.571 02:58:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:49.571 02:58:28 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:49.571 02:58:28 -- target/initiator_timeout.sh@54 -- # wait 86085 00:14:45.876 00:14:45.876 job0: (groupid=0, jobs=1): err= 0: pid=86106: Tue Apr 23 02:59:22 2024 00:14:45.876 read: IOPS=785, BW=3140KiB/s (3216kB/s)(184MiB/60000msec) 00:14:45.876 slat (usec): min=10, max=14747, avg=15.70, stdev=75.68 00:14:45.876 clat (usec): min=116, max=40372k, avg=1068.36, stdev=186016.58 00:14:45.876 lat (usec): min=172, max=40372k, avg=1084.06, stdev=186016.59 00:14:45.876 clat percentiles (usec): 00:14:45.876 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 188], 20.00th=[ 194], 00:14:45.876 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:14:45.876 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 239], 95.00th=[ 247], 00:14:45.876 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 347], 00:14:45.876 | 99.99th=[ 881] 00:14:45.876 write: IOPS=788, BW=3152KiB/s (3228kB/s)(185MiB/60000msec); 0 zone resets 00:14:45.876 slat (usec): min=13, max=2475, avg=23.08, stdev=13.50 00:14:45.876 clat (usec): min=4, max=7385, avg=162.12, stdev=48.11 00:14:45.876 lat (usec): min=139, max=7429, avg=185.19, stdev=50.07 00:14:45.876 clat percentiles (usec): 00:14:45.876 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:14:45.876 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:14:45.876 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 186], 95.00th=[ 194], 00:14:45.876 | 99.00th=[ 212], 99.50th=[ 221], 99.90th=[ 255], 99.95th=[ 318], 00:14:45.876 | 99.99th=[ 644] 00:14:45.876 bw ( KiB/s): min= 6080, max=11952, per=100.00%, avg=9451.82, stdev=1362.58, samples=39 00:14:45.876 iops : min= 1520, max= 2988, avg=2362.95, stdev=340.65, samples=39 00:14:45.876 lat (usec) : 10=0.01%, 250=97.92%, 500=2.06%, 750=0.01%, 1000=0.01% 00:14:45.876 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:14:45.876 cpu : usr=0.62%, sys=2.35%, ctx=94406, majf=0, minf=2 00:14:45.876 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.876 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.876 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.876 issued rwts: total=47104,47286,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.876 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.876 00:14:45.876 Run status group 0 (all jobs): 00:14:45.876 READ: bw=3140KiB/s (3216kB/s), 3140KiB/s-3140KiB/s (3216kB/s-3216kB/s), io=184MiB (193MB), run=60000-60000msec 00:14:45.876 WRITE: bw=3152KiB/s (3228kB/s), 3152KiB/s-3152KiB/s (3228kB/s-3228kB/s), io=185MiB (194MB), run=60000-60000msec 00:14:45.876 00:14:45.876 Disk stats (read/write): 00:14:45.876 nvme0n1: ios=47017/47104, merge=0/0, ticks=10602/8559, in_queue=19161, util=99.52% 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.876 02:59:22 -- common/autotest_common.sh@1205 -- # local i=0 00:14:45.876 02:59:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:14:45.876 02:59:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.876 02:59:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:14:45.876 02:59:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.876 nvmf hotplug test: fio successful as expected 00:14:45.876 02:59:22 -- common/autotest_common.sh@1217 -- # return 0 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.876 02:59:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.876 02:59:22 -- common/autotest_common.sh@10 -- # set +x 00:14:45.876 02:59:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:45.876 02:59:22 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:45.876 02:59:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:45.876 02:59:22 -- nvmf/common.sh@117 -- # sync 00:14:45.876 02:59:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.876 02:59:22 -- nvmf/common.sh@120 -- # set +e 00:14:45.876 02:59:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.876 02:59:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.876 rmmod nvme_tcp 00:14:45.876 rmmod nvme_fabrics 00:14:45.876 rmmod nvme_keyring 00:14:45.876 02:59:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.876 02:59:22 -- nvmf/common.sh@124 -- # set -e 00:14:45.876 02:59:22 -- nvmf/common.sh@125 -- # return 0 00:14:45.876 02:59:22 -- nvmf/common.sh@478 -- # '[' -n 86021 ']' 00:14:45.876 02:59:22 -- nvmf/common.sh@479 -- # killprocess 86021 00:14:45.876 02:59:22 -- common/autotest_common.sh@936 -- # '[' -z 86021 ']' 00:14:45.876 02:59:22 -- common/autotest_common.sh@940 -- # kill -0 86021 00:14:45.876 02:59:22 -- common/autotest_common.sh@941 -- # uname 00:14:45.876 02:59:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.876 02:59:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86021 00:14:45.876 killing process with 
pid 86021 00:14:45.876 02:59:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.876 02:59:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.876 02:59:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86021' 00:14:45.876 02:59:22 -- common/autotest_common.sh@955 -- # kill 86021 00:14:45.876 02:59:22 -- common/autotest_common.sh@960 -- # wait 86021 00:14:45.876 02:59:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:45.876 02:59:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:45.876 02:59:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:45.876 02:59:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.876 02:59:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.876 02:59:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.876 02:59:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.876 02:59:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.876 02:59:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:45.876 00:14:45.876 real 1m4.411s 00:14:45.876 user 3m51.993s 00:14:45.876 sys 0m22.813s 00:14:45.877 02:59:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.877 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.877 ************************************ 00:14:45.877 END TEST nvmf_initiator_timeout 00:14:45.877 ************************************ 00:14:45.877 02:59:23 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:14:45.877 02:59:23 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:14:45.877 02:59:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:45.877 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.877 02:59:23 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:14:45.877 02:59:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.877 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.877 02:59:23 -- nvmf/nvmf.sh@88 -- # [[ 1 -eq 0 ]] 00:14:45.877 02:59:23 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:45.877 02:59:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.877 02:59:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.877 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.877 ************************************ 00:14:45.877 START TEST nvmf_identify 00:14:45.877 ************************************ 00:14:45.877 02:59:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:45.877 * Looking for test storage... 
00:14:45.877 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:45.877 02:59:23 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:45.877 02:59:23 -- nvmf/common.sh@7 -- # uname -s 00:14:45.877 02:59:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:45.877 02:59:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:45.877 02:59:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:45.877 02:59:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:45.877 02:59:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:45.877 02:59:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:45.877 02:59:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:45.877 02:59:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:45.877 02:59:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:45.877 02:59:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:45.877 02:59:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:14:45.877 02:59:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:14:45.877 02:59:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:45.877 02:59:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:45.877 02:59:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:45.877 02:59:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:45.877 02:59:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:45.877 02:59:23 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:45.877 02:59:23 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:45.877 02:59:23 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:45.877 02:59:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.877 02:59:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.877 02:59:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.877 02:59:23 -- paths/export.sh@5 -- # export PATH 00:14:45.877 02:59:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:45.877 02:59:23 -- nvmf/common.sh@47 -- # : 0 00:14:45.877 02:59:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:45.877 02:59:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:45.877 02:59:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:45.877 02:59:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:45.877 02:59:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:45.877 02:59:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:45.877 02:59:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:45.877 02:59:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:45.877 02:59:23 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:45.877 02:59:23 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:45.877 02:59:23 -- host/identify.sh@14 -- # nvmftestinit 00:14:45.877 02:59:23 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:45.877 02:59:23 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:45.877 02:59:23 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:45.877 02:59:23 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:45.877 02:59:23 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:45.877 02:59:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.877 02:59:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.877 02:59:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.877 02:59:23 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:45.877 02:59:23 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:45.877 02:59:23 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:45.877 02:59:23 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:45.877 02:59:23 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:45.877 02:59:23 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:45.877 02:59:23 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.877 02:59:23 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.877 02:59:23 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:45.877 02:59:23 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:45.877 02:59:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:45.877 02:59:23 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:45.877 02:59:23 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:45.877 02:59:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.877 02:59:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:45.877 02:59:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:45.877 02:59:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:45.877 02:59:23 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:45.877 02:59:23 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:45.877 02:59:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:45.877 Cannot find device "nvmf_tgt_br" 00:14:45.877 02:59:23 -- nvmf/common.sh@155 -- # true 00:14:45.877 02:59:23 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.877 Cannot find device "nvmf_tgt_br2" 00:14:45.877 02:59:23 -- nvmf/common.sh@156 -- # true 00:14:45.877 02:59:23 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:45.877 02:59:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:45.877 Cannot find device "nvmf_tgt_br" 00:14:45.877 02:59:23 -- nvmf/common.sh@158 -- # true 00:14:45.877 02:59:23 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:45.877 Cannot find device "nvmf_tgt_br2" 00:14:45.877 02:59:23 -- nvmf/common.sh@159 -- # true 00:14:45.877 02:59:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:45.877 02:59:23 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:45.877 02:59:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.877 02:59:23 -- nvmf/common.sh@162 -- # true 00:14:45.877 02:59:23 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.877 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:45.877 02:59:23 -- nvmf/common.sh@163 -- # true 00:14:45.877 02:59:23 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:45.877 02:59:23 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:45.877 02:59:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:45.877 02:59:23 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:45.877 02:59:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:45.877 02:59:23 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:45.877 02:59:23 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:45.877 02:59:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:45.877 02:59:23 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:45.877 02:59:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:45.877 02:59:23 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:45.877 02:59:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:45.877 02:59:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:45.877 02:59:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:45.877 02:59:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:45.877 02:59:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:14:45.877 02:59:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:45.877 02:59:23 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:45.877 02:59:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:45.877 02:59:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:45.877 02:59:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:45.877 02:59:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:45.877 02:59:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:45.877 02:59:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:45.878 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.878 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:14:45.878 00:14:45.878 --- 10.0.0.2 ping statistics --- 00:14:45.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.878 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:45.878 02:59:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:45.878 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:45.878 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:14:45.878 00:14:45.878 --- 10.0.0.3 ping statistics --- 00:14:45.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.878 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:14:45.878 02:59:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:45.878 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:45.878 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:45.878 00:14:45.878 --- 10.0.0.1 ping statistics --- 00:14:45.878 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.878 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:45.878 02:59:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.878 02:59:23 -- nvmf/common.sh@422 -- # return 0 00:14:45.878 02:59:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:45.878 02:59:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.878 02:59:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:45.878 02:59:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:45.878 02:59:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.878 02:59:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:45.878 02:59:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:45.878 02:59:23 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:45.878 02:59:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:45.878 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 02:59:23 -- host/identify.sh@19 -- # nvmfpid=86948 00:14:45.878 02:59:23 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:45.878 02:59:23 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:45.878 02:59:23 -- host/identify.sh@23 -- # waitforlisten 86948 00:14:45.878 02:59:23 -- common/autotest_common.sh@817 -- # '[' -z 86948 ']' 00:14:45.878 02:59:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.878 02:59:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:45.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
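The target launch repeats the same launch-and-wait pattern as the earlier tests: start nvmf_tgt inside the namespace, record its pid, and block until the RPC socket answers. A sketch assembled from the lines above (binary path, flags, and socket path are verbatim from the trace; the actual waitforlisten helper lives in autotest_common.sh and polls with max_retries=100):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # blocks until the app listens on /var/tmp/spdk.sock (the default rpc_addr)
    waitforlisten "$nvmfpid"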
00:14:45.878 02:59:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.878 02:59:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:45.878 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 [2024-04-23 02:59:23.718712] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:14:45.878 [2024-04-23 02:59:23.718795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.878 [2024-04-23 02:59:23.836918] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:45.878 [2024-04-23 02:59:23.854823] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:45.878 [2024-04-23 02:59:23.890057] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:45.878 [2024-04-23 02:59:23.890373] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:45.878 [2024-04-23 02:59:23.890601] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:45.878 [2024-04-23 02:59:23.890831] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:45.878 [2024-04-23 02:59:23.890960] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:45.878 [2024-04-23 02:59:23.891227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:45.878 [2024-04-23 02:59:23.891323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:45.878 [2024-04-23 02:59:23.891417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:45.878 [2024-04-23 02:59:23.891419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.878 02:59:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:45.878 02:59:23 -- common/autotest_common.sh@850 -- # return 0 00:14:45.878 02:59:23 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.878 02:59:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 [2024-04-23 02:59:23.975549] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.878 02:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:23 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:45.878 02:59:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:45.878 02:59:23 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 02:59:24 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:45.878 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 Malloc0 00:14:45.878 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:24 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:45.878 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:24 
-- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:45.878 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:24 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.878 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 [2024-04-23 02:59:24.082526] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.878 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:24 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:45.878 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:24 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:45.878 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.878 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.878 [2024-04-23 02:59:24.102258] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:45.878 [ 00:14:45.878 { 00:14:45.878 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:45.878 "subtype": "Discovery", 00:14:45.878 "listen_addresses": [ 00:14:45.878 { 00:14:45.878 "transport": "TCP", 00:14:45.878 "trtype": "TCP", 00:14:45.878 "adrfam": "IPv4", 00:14:45.878 "traddr": "10.0.0.2", 00:14:45.878 "trsvcid": "4420" 00:14:45.878 } 00:14:45.878 ], 00:14:45.878 "allow_any_host": true, 00:14:45.878 "hosts": [] 00:14:45.878 }, 00:14:45.878 { 00:14:45.878 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:45.878 "subtype": "NVMe", 00:14:45.878 "listen_addresses": [ 00:14:45.878 { 00:14:45.878 "transport": "TCP", 00:14:45.878 "trtype": "TCP", 00:14:45.878 "adrfam": "IPv4", 00:14:45.878 "traddr": "10.0.0.2", 00:14:45.878 "trsvcid": "4420" 00:14:45.878 } 00:14:45.878 ], 00:14:45.878 "allow_any_host": true, 00:14:45.878 "hosts": [], 00:14:45.878 "serial_number": "SPDK00000000000001", 00:14:45.878 "model_number": "SPDK bdev Controller", 00:14:45.878 "max_namespaces": 32, 00:14:45.878 "min_cntlid": 1, 00:14:45.878 "max_cntlid": 65519, 00:14:45.878 "namespaces": [ 00:14:45.878 { 00:14:45.878 "nsid": 1, 00:14:45.878 "bdev_name": "Malloc0", 00:14:45.878 "name": "Malloc0", 00:14:45.878 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:45.878 "eui64": "ABCDEF0123456789", 00:14:45.878 "uuid": "7e655f70-862e-4938-be89-7fe4a480c8ea" 00:14:45.878 } 00:14:45.878 ] 00:14:45.878 } 00:14:45.878 ] 00:14:45.878 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.878 02:59:24 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:45.878 [2024-04-23 02:59:24.143375] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
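
The rpc_cmd calls above configure the target entirely over JSON-RPC: a TCP transport, a RAM-backed bdev, an NVM subsystem with one namespace, and data plus discovery listeners, confirmed by the nvmf_get_subsystems dump. rpc_cmd is a thin test wrapper around scripts/rpc.py, so the same sequence can be replayed by hand against the target's /var/tmp/spdk.sock socket; a sketch:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192   # transport options as used by the harness
    $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                   # -a: allow any host, -s: serial number
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_get_subsystems                       # should match the JSON above

Note the deprecation warning emitted by nvmf_get_subsystems: the listener's "transport" key is being retired in favor of "trtype", which is why both appear in the JSON dump.
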
00:14:45.878 [2024-04-23 02:59:24.143447] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86976 ] 00:14:45.878 [2024-04-23 02:59:24.263785] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:45.878 [2024-04-23 02:59:24.284867] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:45.878 [2024-04-23 02:59:24.284962] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:45.878 [2024-04-23 02:59:24.284970] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:45.878 [2024-04-23 02:59:24.284984] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:45.878 [2024-04-23 02:59:24.284997] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:45.878 [2024-04-23 02:59:24.285169] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:45.878 [2024-04-23 02:59:24.285237] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a0a600 0 00:14:45.878 [2024-04-23 02:59:24.297209] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:45.878 [2024-04-23 02:59:24.297249] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:45.878 [2024-04-23 02:59:24.297271] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:45.878 [2024-04-23 02:59:24.297275] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:45.879 [2024-04-23 02:59:24.297323] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.297330] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.297334] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.297350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:45.879 [2024-04-23 02:59:24.297384] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.304250] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.304270] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.304291] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.304312] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:45.879 [2024-04-23 02:59:24.304322] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:45.879 [2024-04-23 02:59:24.304328] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:45.879 [2024-04-23 02:59:24.304345] nvme_tcp.c: 
766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304351] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.304364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.304406] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.304464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.304471] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.304475] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304479] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.304489] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:45.879 [2024-04-23 02:59:24.304497] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:45.879 [2024-04-23 02:59:24.304504] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304508] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304512] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.304519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.304570] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.304632] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.304639] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.304643] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304647] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.304654] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:45.879 [2024-04-23 02:59:24.304663] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:45.879 [2024-04-23 02:59:24.304671] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304676] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304679] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.304687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.304704] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 
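
The wall of *DEBUG* records above and below is the SPDK initiator's controller state machine talking to the discovery controller: FABRIC CONNECT on the admin queue, PROPERTY GET of VS and CAP, a CC read ("check en"), a wait for CSTS.RDY = 0 with the controller disabled, PROPERTY SET of CC.EN = 1, a wait for CSTS.RDY = 1, and finally IDENTIFY. These are the same register accesses a PCIe driver performs via MMIO, carried here as NVMe-oF property capsules. The verbosity comes from the identify tool's -L all flag, which enables every debug log flag; the invocation from the script above can be rerun standalone:

    # -L all turns on all SPDK debug log flags; drop it to get just the report
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all
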
00:14:45.879 [2024-04-23 02:59:24.304764] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.304771] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.304775] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304780] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.304787] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:45.879 [2024-04-23 02:59:24.304798] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304803] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304807] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.304815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.304833] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.304877] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.304884] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.304888] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.304893] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.304899] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:45.879 [2024-04-23 02:59:24.304905] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:45.879 [2024-04-23 02:59:24.304913] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:45.879 [2024-04-23 02:59:24.305019] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:45.879 [2024-04-23 02:59:24.305025] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:45.879 [2024-04-23 02:59:24.305034] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305039] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305043] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.305062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.305080] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.305145] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.305152] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.305156] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305160] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.305167] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:45.879 [2024-04-23 02:59:24.305177] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305182] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305186] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.305194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.305211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.305279] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.305288] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.305292] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305296] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.879 [2024-04-23 02:59:24.305302] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:45.879 [2024-04-23 02:59:24.305307] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:45.879 [2024-04-23 02:59:24.305316] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:45.879 [2024-04-23 02:59:24.305326] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:45.879 [2024-04-23 02:59:24.305336] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305341] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.879 [2024-04-23 02:59:24.305349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.879 [2024-04-23 02:59:24.305370] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.879 [2024-04-23 02:59:24.305472] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.879 [2024-04-23 02:59:24.305480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.879 [2024-04-23 02:59:24.305484] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305488] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0a600): datao=0, datal=4096, cccid=0 00:14:45.879 [2024-04-23 02:59:24.305493] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1a51390) on tqpair(0x1a0a600): expected_datao=0, payload_size=4096 00:14:45.879 [2024-04-23 02:59:24.305498] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305506] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305511] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.879 [2024-04-23 02:59:24.305526] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.879 [2024-04-23 02:59:24.305530] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.879 [2024-04-23 02:59:24.305534] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.880 [2024-04-23 02:59:24.305544] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:45.880 [2024-04-23 02:59:24.305549] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:45.880 [2024-04-23 02:59:24.305554] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:45.880 [2024-04-23 02:59:24.305563] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:45.880 [2024-04-23 02:59:24.305569] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:45.880 [2024-04-23 02:59:24.305574] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:45.880 [2024-04-23 02:59:24.305583] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:45.880 [2024-04-23 02:59:24.305591] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305595] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305600] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.305608] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.880 [2024-04-23 02:59:24.305627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.880 [2024-04-23 02:59:24.305686] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.880 [2024-04-23 02:59:24.305693] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.880 [2024-04-23 02:59:24.305697] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305701] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51390) on tqpair=0x1a0a600 00:14:45.880 [2024-04-23 02:59:24.305710] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305714] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.305725] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.880 [2024-04-23 02:59:24.305742] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305746] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305750] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.305756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.880 [2024-04-23 02:59:24.305763] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305767] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305770] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.305776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.880 [2024-04-23 02:59:24.305783] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305787] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305791] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.305797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.880 [2024-04-23 02:59:24.305802] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:45.880 [2024-04-23 02:59:24.305815] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:45.880 [2024-04-23 02:59:24.305822] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.305827] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.305834] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.880 [2024-04-23 02:59:24.305854] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51390, cid 0, qid 0 00:14:45.880 [2024-04-23 02:59:24.305861] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a514f0, cid 1, qid 0 00:14:45.880 [2024-04-23 02:59:24.305866] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51650, cid 2, qid 0 00:14:45.880 [2024-04-23 02:59:24.305871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.880 [2024-04-23 02:59:24.305876] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51910, cid 4, qid 0 00:14:45.880 [2024-04-23 02:59:24.305972] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.880 [2024-04-23 02:59:24.305979] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.880 [2024-04-23 02:59:24.305983] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 
02:59:24.305987] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51910) on tqpair=0x1a0a600 00:14:45.880 [2024-04-23 02:59:24.305994] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:45.880 [2024-04-23 02:59:24.305999] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:45.880 [2024-04-23 02:59:24.306011] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306016] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.306023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.880 [2024-04-23 02:59:24.306041] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51910, cid 4, qid 0 00:14:45.880 [2024-04-23 02:59:24.306114] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.880 [2024-04-23 02:59:24.306121] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.880 [2024-04-23 02:59:24.306125] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306129] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0a600): datao=0, datal=4096, cccid=4 00:14:45.880 [2024-04-23 02:59:24.306134] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a51910) on tqpair(0x1a0a600): expected_datao=0, payload_size=4096 00:14:45.880 [2024-04-23 02:59:24.306138] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306146] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306150] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306171] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.880 [2024-04-23 02:59:24.306178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.880 [2024-04-23 02:59:24.306182] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306186] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51910) on tqpair=0x1a0a600 00:14:45.880 [2024-04-23 02:59:24.306201] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:45.880 [2024-04-23 02:59:24.306225] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.306237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.880 [2024-04-23 02:59:24.306245] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306249] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306253] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.306259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.880 [2024-04-23 02:59:24.306287] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51910, cid 4, qid 0 00:14:45.880 [2024-04-23 02:59:24.306295] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51a70, cid 5, qid 0 00:14:45.880 [2024-04-23 02:59:24.306398] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.880 [2024-04-23 02:59:24.306405] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.880 [2024-04-23 02:59:24.306409] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306413] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0a600): datao=0, datal=1024, cccid=4 00:14:45.880 [2024-04-23 02:59:24.306418] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a51910) on tqpair(0x1a0a600): expected_datao=0, payload_size=1024 00:14:45.880 [2024-04-23 02:59:24.306422] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306429] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306433] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.880 [2024-04-23 02:59:24.306445] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.880 [2024-04-23 02:59:24.306449] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306453] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51a70) on tqpair=0x1a0a600 00:14:45.880 [2024-04-23 02:59:24.306472] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.880 [2024-04-23 02:59:24.306479] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.880 [2024-04-23 02:59:24.306483] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306487] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51910) on tqpair=0x1a0a600 00:14:45.880 [2024-04-23 02:59:24.306501] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306506] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0a600) 00:14:45.880 [2024-04-23 02:59:24.306513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.880 [2024-04-23 02:59:24.306536] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51910, cid 4, qid 0 00:14:45.880 [2024-04-23 02:59:24.306600] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.880 [2024-04-23 02:59:24.306607] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.880 [2024-04-23 02:59:24.306611] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.880 [2024-04-23 02:59:24.306615] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0a600): datao=0, datal=3072, cccid=4 00:14:45.881 [2024-04-23 02:59:24.306620] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a51910) on tqpair(0x1a0a600): expected_datao=0, payload_size=3072 00:14:45.881 [2024-04-23 02:59:24.306624] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306631] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306636] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306644] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.881 [2024-04-23 02:59:24.306666] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.881 [2024-04-23 02:59:24.306670] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306674] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51910) on tqpair=0x1a0a600 00:14:45.881 [2024-04-23 02:59:24.306685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a0a600) 00:14:45.881 [2024-04-23 02:59:24.306697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.881 [2024-04-23 02:59:24.306720] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a51910, cid 4, qid 0 00:14:45.881 [2024-04-23 02:59:24.306782] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.881 [2024-04-23 02:59:24.306789] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.881 [2024-04-23 02:59:24.306794] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306797] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a0a600): datao=0, datal=8, cccid=4 00:14:45.881 [2024-04-23 02:59:24.306803] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a51910) on tqpair(0x1a0a600): expected_datao=0, payload_size=8 00:14:45.881 [2024-04-23 02:59:24.306807] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.881 [2024-04-23 02:59:24.306814] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.881 ===================================================== 00:14:45.881 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:45.881 ===================================================== 00:14:45.881 Controller Capabilities/Features 00:14:45.881 ================================ 00:14:45.881 Vendor ID: 0000 00:14:45.881 Subsystem Vendor ID: 0000 00:14:45.881 Serial Number: .................... 00:14:45.881 Model Number: ........................................ 
00:14:45.881 Firmware Version: 24.05 00:14:45.881 Recommended Arb Burst: 0 00:14:45.881 IEEE OUI Identifier: 00 00 00 00:14:45.881 Multi-path I/O 00:14:45.881 May have multiple subsystem ports: No 00:14:45.881 May have multiple controllers: No 00:14:45.881 Associated with SR-IOV VF: No 00:14:45.881 Max Data Transfer Size: 131072 00:14:45.881 Max Number of Namespaces: 0 00:14:45.881 Max Number of I/O Queues: 1024 00:14:45.881 NVMe Specification Version (VS): 1.3 00:14:45.881 NVMe Specification Version (Identify): 1.3 00:14:45.881 Maximum Queue Entries: 128 00:14:45.881 Contiguous Queues Required: Yes 00:14:45.881 Arbitration Mechanisms Supported 00:14:45.881 Weighted Round Robin: Not Supported 00:14:45.881 Vendor Specific: Not Supported 00:14:45.881 Reset Timeout: 15000 ms 00:14:45.881 Doorbell Stride: 4 bytes 00:14:45.881 NVM Subsystem Reset: Not Supported 00:14:45.881 Command Sets Supported 00:14:45.881 NVM Command Set: Supported 00:14:45.881 Boot Partition: Not Supported 00:14:45.881 Memory Page Size Minimum: 4096 bytes 00:14:45.881 Memory Page Size Maximum: 4096 bytes 00:14:45.881 Persistent Memory Region: Not Supported 00:14:45.881 Optional Asynchronous Events Supported 00:14:45.881 Namespace Attribute Notices: Not Supported 00:14:45.881 Firmware Activation Notices: Not Supported 00:14:45.881 ANA Change Notices: Not Supported 00:14:45.881 PLE Aggregate Log Change Notices: Not Supported 00:14:45.881 LBA Status Info Alert Notices: Not Supported 00:14:45.881 EGE Aggregate Log Change Notices: Not Supported 00:14:45.881 Normal NVM Subsystem Shutdown event: Not Supported 00:14:45.881 Zone Descriptor Change Notices: Not Supported 00:14:45.881 Discovery Log Change Notices: Supported 00:14:45.881 Controller Attributes 00:14:45.881 128-bit Host Identifier: Not Supported 00:14:45.881 Non-Operational Permissive Mode: Not Supported 00:14:45.881 NVM Sets: Not Supported 00:14:45.881 Read Recovery Levels: Not Supported 00:14:45.881 Endurance Groups: Not Supported 00:14:45.881 Predictable Latency Mode: Not Supported 00:14:45.881 Traffic Based Keep ALive: Not Supported 00:14:45.881 Namespace Granularity: Not Supported 00:14:45.881 SQ Associations: Not Supported 00:14:45.881 UUID List: Not Supported 00:14:45.881 Multi-Domain Subsystem: Not Supported 00:14:45.881 Fixed Capacity Management: Not Supported 00:14:45.881 Variable Capacity Management: Not Supported 00:14:45.881 Delete Endurance Group: Not Supported 00:14:45.881 Delete NVM Set: Not Supported 00:14:45.881 Extended LBA Formats Supported: Not Supported 00:14:45.881 Flexible Data Placement Supported: Not Supported 00:14:45.881 00:14:45.881 Controller Memory Buffer Support 00:14:45.881 ================================ 00:14:45.881 Supported: No 00:14:45.881 00:14:45.881 Persistent Memory Region Support 00:14:45.881 ================================ 00:14:45.881 Supported: No 00:14:45.881 00:14:45.881 Admin Command Set Attributes 00:14:45.881 ============================ 00:14:45.881 Security Send/Receive: Not Supported 00:14:45.881 Format NVM: Not Supported 00:14:45.881 Firmware Activate/Download: Not Supported 00:14:45.881 Namespace Management: Not Supported 00:14:45.881 Device Self-Test: Not Supported 00:14:45.881 Directives: Not Supported 00:14:45.881 NVMe-MI: Not Supported 00:14:45.881 Virtualization Management: Not Supported 00:14:45.881 Doorbell Buffer Config: Not Supported 00:14:45.881 Get LBA Status Capability: Not Supported 00:14:45.881 Command & Feature Lockdown Capability: Not Supported 00:14:45.881 Abort Command Limit: 1 00:14:45.881 Async 
Event Request Limit: 4 00:14:45.881 Number of Firmware Slots: N/A 00:14:45.881 Firmware Slot 1 Read-Only: N/A 00:14:45.881 Firmware Activation Without Reset: N/A 00:14:45.881 Multiple Update Detection Support: N/A 00:14:45.881 Firmware Update Granularity: No Information Provided 00:14:45.881 Per-Namespace SMART Log: No 00:14:45.881 Asymmetric Namespace Access Log Page: Not Supported 00:14:45.881 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:45.881 Command Effects Log Page: Not Supported 00:14:45.881 Get Log Page Extended Data: Supported 00:14:45.881 Telemetry Log Pages: Not Supported 00:14:45.881 Persistent Event Log Pages: Not Supported 00:14:45.881 Supported Log Pages Log Page: May Support 00:14:45.881 Commands Supported & Effects Log Page: Not Supported 00:14:45.881 Feature Identifiers & Effects Log Page:May Support 00:14:45.881 NVMe-MI Commands & Effects Log Page: May Support 00:14:45.881 Data Area 4 for Telemetry Log: Not Supported 00:14:45.881 Error Log Page Entries Supported: 128 00:14:45.881 Keep Alive: Not Supported 00:14:45.881 00:14:45.881 NVM Command Set Attributes 00:14:45.881 ========================== 00:14:45.881 Submission Queue Entry Size 00:14:45.881 Max: 1 00:14:45.881 Min: 1 00:14:45.881 Completion Queue Entry Size 00:14:45.881 Max: 1 00:14:45.881 Min: 1 00:14:45.881 Number of Namespaces: 0 00:14:45.881 Compare Command: Not Supported 00:14:45.881 Write Uncorrectable Command: Not Supported 00:14:45.881 Dataset Management Command: Not Supported 00:14:45.881 Write Zeroes Command: Not Supported 00:14:45.881 Set Features Save Field: Not Supported 00:14:45.881 Reservations: Not Supported 00:14:45.881 Timestamp: Not Supported 00:14:45.881 Copy: Not Supported 00:14:45.881 Volatile Write Cache: Not Present 00:14:45.881 Atomic Write Unit (Normal): 1 00:14:45.881 Atomic Write Unit (PFail): 1 00:14:45.882 Atomic Compare & Write Unit: 1 00:14:45.882 Fused Compare & Write: Supported 00:14:45.882 Scatter-Gather List 00:14:45.882 SGL Command Set: Supported 00:14:45.882 SGL Keyed: Supported 00:14:45.882 SGL Bit Bucket Descriptor: Not Supported 00:14:45.882 SGL Metadata Pointer: Not Supported 00:14:45.882 Oversized SGL: Not Supported 00:14:45.882 SGL Metadata Address: Not Supported 00:14:45.882 SGL Offset: Supported 00:14:45.882 Transport SGL Data Block: Not Supported 00:14:45.882 Replay Protected Memory Block: Not Supported 00:14:45.882 00:14:45.882 Firmware Slot Information 00:14:45.882 ========================= 00:14:45.882 Active slot: 0 00:14:45.882 00:14:45.882 00:14:45.882 Error Log 00:14:45.882 ========= 00:14:45.882 00:14:45.882 Active Namespaces 00:14:45.882 ================= 00:14:45.882 Discovery Log Page 00:14:45.882 ================== 00:14:45.882 Generation Counter: 2 00:14:45.882 Number of Records: 2 00:14:45.882 Record Format: 0 00:14:45.882 00:14:45.882 Discovery Log Entry 0 00:14:45.882 ---------------------- 00:14:45.882 Transport Type: 3 (TCP) 00:14:45.882 Address Family: 1 (IPv4) 00:14:45.882 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:45.882 Entry Flags: 00:14:45.882 Duplicate Returned Information: 1 00:14:45.882 Explicit Persistent Connection Support for Discovery: 1 00:14:45.882 Transport Requirements: 00:14:45.882 Secure Channel: Not Required 00:14:45.882 Port ID: 0 (0x0000) 00:14:45.882 Controller ID: 65535 (0xffff) 00:14:45.882 Admin Max SQ Size: 128 00:14:45.882 Transport Service Identifier: 4420 00:14:45.882 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:45.882 Transport Address: 10.0.0.2 00:14:45.882 
Discovery Log Entry 1 00:14:45.882 ---------------------- 00:14:45.882 Transport Type: 3 (TCP) 00:14:45.882 Address Family: 1 (IPv4) 00:14:45.882 Subsystem Type: 2 (NVM Subsystem) 00:14:45.882 Entry Flags: 00:14:45.882 Duplicate Returned Information: 0 00:14:45.882 Explicit Persistent Connection Support for Discovery: 0 00:14:45.882 Transport Requirements: 00:14:45.882 Secure Channel: Not Required 00:14:45.882 Port ID: 0 (0x0000) 00:14:45.882 Controller ID: 65535 (0xffff) 00:14:45.882 Admin Max SQ Size: 128 00:14:45.882 Transport Service Identifier: 4420 00:14:45.882 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:45.882 Transport Address: 10.0.0.2 [2024-04-23 02:59:24.306819] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.306834] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.306841] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.306845] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.306849] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a51910) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.306964] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:45.882 [2024-04-23 02:59:24.306981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.882 [2024-04-23 02:59:24.306989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.882 [2024-04-23 02:59:24.306996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.882 [2024-04-23 02:59:24.307002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:45.882 [2024-04-23 02:59:24.307012] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.882 [2024-04-23 02:59:24.307029] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307112] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307123] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.307170] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307177] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307181] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 
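
The report above is the discovery controller's self-description (note Max Number of Namespaces: 0 and Discovery Log Change Notices: Supported, both characteristic of a discovery subsystem) followed by its discovery log page: generation counter 2 and two records, the discovery subsystem itself and the nqn.2016-06.io.spdk:cnode1 NVM subsystem, both NVMe/TCP on 10.0.0.2:4420. A kernel-mode host would retrieve the same log page with nvme-cli (hypothetical here, since the test uses the SPDK userspace initiator; the nvme-tcp module loaded earlier is required):

    nvme discover -t tcp -a 10.0.0.2 -s 4420
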
00:14:45.882 [2024-04-23 02:59:24.307189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307214] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307282] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307289] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307293] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307297] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.307303] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:45.882 [2024-04-23 02:59:24.307308] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:45.882 [2024-04-23 02:59:24.307319] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307324] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307328] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.882 [2024-04-23 02:59:24.307335] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307353] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307400] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307408] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307412] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307416] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.307438] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307460] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307464] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.882 [2024-04-23 02:59:24.307472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307491] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307542] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307549] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307553] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307557] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.307570] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307574] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:14:45.882 [2024-04-23 02:59:24.307579] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.882 [2024-04-23 02:59:24.307586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307604] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307653] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307660] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307664] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307669] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.307681] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307686] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307690] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.882 [2024-04-23 02:59:24.307698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307715] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307785] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307808] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307812] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307816] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.882 [2024-04-23 02:59:24.307828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307832] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.882 [2024-04-23 02:59:24.307836] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.882 [2024-04-23 02:59:24.307844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.882 [2024-04-23 02:59:24.307861] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.882 [2024-04-23 02:59:24.307905] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.882 [2024-04-23 02:59:24.307912] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.882 [2024-04-23 02:59:24.307916] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.307920] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.883 [2024-04-23 02:59:24.307931] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.307936] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.307940] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.883 [2024-04-23 02:59:24.307948] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.307964] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.883 [2024-04-23 02:59:24.308021] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.308028] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.308032] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.308036] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.883 [2024-04-23 02:59:24.308048] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.308053] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.308057] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.883 [2024-04-23 02:59:24.308064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.308082] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.883 [2024-04-23 02:59:24.308127] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.308135] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.308138] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.308143] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.883 [2024-04-23 02:59:24.308169] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.308174] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.308178] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.883 [2024-04-23 02:59:24.308186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.308203] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.883 [2024-04-23 02:59:24.312235] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.312255] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.312276] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.312281] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.883 [2024-04-23 02:59:24.312296] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.312301] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.312305] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a0a600) 00:14:45.883 [2024-04-23 02:59:24.312314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.312338] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a517b0, cid 3, qid 0 00:14:45.883 [2024-04-23 02:59:24.312403] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.312410] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.312414] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.312418] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1a517b0) on tqpair=0x1a0a600 00:14:45.883 [2024-04-23 02:59:24.312427] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:45.883 00:14:45.883 02:59:24 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:45.883 [2024-04-23 02:59:24.349241] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:14:45.883 [2024-04-23 02:59:24.349300] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86983 ] 00:14:45.883 [2024-04-23 02:59:24.470602] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:45.883 [2024-04-23 02:59:24.492466] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:45.883 [2024-04-23 02:59:24.492538] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:45.883 [2024-04-23 02:59:24.492545] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:45.883 [2024-04-23 02:59:24.492557] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:45.883 [2024-04-23 02:59:24.492568] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:45.883 [2024-04-23 02:59:24.492694] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:45.883 [2024-04-23 02:59:24.492774] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x84e600 0 00:14:45.883 [2024-04-23 02:59:24.499544] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:45.883 [2024-04-23 02:59:24.499570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:45.883 [2024-04-23 02:59:24.499577] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:45.883 [2024-04-23 02:59:24.499581] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:45.883 [2024-04-23 02:59:24.499623] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.499630] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.499635] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.883 [2024-04-23 02:59:24.499649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:45.883 [2024-04-23 02:59:24.499690] nvme_tcp.c: 
923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.883 [2024-04-23 02:59:24.506264] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.506286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.506308] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506313] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.883 [2024-04-23 02:59:24.506327] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:45.883 [2024-04-23 02:59:24.506336] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:45.883 [2024-04-23 02:59:24.506342] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:45.883 [2024-04-23 02:59:24.506360] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506365] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506370] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.883 [2024-04-23 02:59:24.506380] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.506409] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.883 [2024-04-23 02:59:24.506468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.506476] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.506480] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506484] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.883 [2024-04-23 02:59:24.506494] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:45.883 [2024-04-23 02:59:24.506504] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:45.883 [2024-04-23 02:59:24.506512] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506516] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506520] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.883 [2024-04-23 02:59:24.506528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.506548] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.883 [2024-04-23 02:59:24.506704] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.506711] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.506715] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506719] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 
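
The trace above is the admin-queue bring-up that spdk_nvme_identify performs over TCP: the icreq/icresp exchange on the fresh socket, FABRIC CONNECT (the connect poll returns CNTLID 0x0001), then FABRIC PROPERTY GET reads of VS and CAP before the enable sequence. A minimal sketch of the same attach path through SPDK's public API, assuming the target string passed via -r above; the app name is illustrative and error handling is abbreviated:

```c
/* Sketch only, not part of the test: attach to the same NVMe-oF/TCP
 * subsystem the trace is connecting to. spdk_nvme_connect() drives the
 * state machine the DEBUG lines show (connect adminq -> read vs ->
 * read cap -> enable -> ready). */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";  /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same target string the harness passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "connect failed\n");
		return 1;
	}

	spdk_nvme_detach(ctrlr);
	return 0;
}
```
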
00:14:45.883 [2024-04-23 02:59:24.506726] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:45.883 [2024-04-23 02:59:24.506751] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:45.883 [2024-04-23 02:59:24.506759] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.506768] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.883 [2024-04-23 02:59:24.506787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.883 [2024-04-23 02:59:24.506806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.883 [2024-04-23 02:59:24.507236] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.883 [2024-04-23 02:59:24.507266] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.883 [2024-04-23 02:59:24.507271] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.507291] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.883 [2024-04-23 02:59:24.507298] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:45.883 [2024-04-23 02:59:24.507324] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.507329] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.883 [2024-04-23 02:59:24.507333] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.507341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.884 [2024-04-23 02:59:24.507362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.884 [2024-04-23 02:59:24.507415] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.884 [2024-04-23 02:59:24.507422] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.884 [2024-04-23 02:59:24.507426] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.507440] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.884 [2024-04-23 02:59:24.507462] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:45.884 [2024-04-23 02:59:24.507468] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:45.884 [2024-04-23 02:59:24.507477] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:45.884 [2024-04-23 02:59:24.507583] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:45.884 [2024-04-23 02:59:24.507587] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:45.884 [2024-04-23 02:59:24.507597] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.507602] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.507606] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.507614] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.884 [2024-04-23 02:59:24.507634] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.884 [2024-04-23 02:59:24.508201] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.884 [2024-04-23 02:59:24.508217] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.884 [2024-04-23 02:59:24.508222] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508227] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.884 [2024-04-23 02:59:24.508233] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:45.884 [2024-04-23 02:59:24.508245] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508250] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508254] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.508262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.884 [2024-04-23 02:59:24.508284] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.884 [2024-04-23 02:59:24.508331] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.884 [2024-04-23 02:59:24.508338] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.884 [2024-04-23 02:59:24.508342] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508346] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.884 [2024-04-23 02:59:24.508352] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:45.884 [2024-04-23 02:59:24.508358] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:45.884 [2024-04-23 02:59:24.508367] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:45.884 [2024-04-23 02:59:24.508377] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:14:45.884 [2024-04-23 02:59:24.508388] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508393] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.508401] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.884 [2024-04-23 02:59:24.508421] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.884 [2024-04-23 02:59:24.508907] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.884 [2024-04-23 02:59:24.508923] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.884 [2024-04-23 02:59:24.508928] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508933] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=4096, cccid=0 00:14:45.884 [2024-04-23 02:59:24.508938] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895390) on tqpair(0x84e600): expected_datao=0, payload_size=4096 00:14:45.884 [2024-04-23 02:59:24.508944] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508952] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508957] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508966] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.884 [2024-04-23 02:59:24.508973] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.884 [2024-04-23 02:59:24.508977] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.508981] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.884 [2024-04-23 02:59:24.508990] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:45.884 [2024-04-23 02:59:24.508996] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:45.884 [2024-04-23 02:59:24.509001] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:45.884 [2024-04-23 02:59:24.509020] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:45.884 [2024-04-23 02:59:24.509026] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:45.884 [2024-04-23 02:59:24.509031] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:45.884 [2024-04-23 02:59:24.509042] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:45.884 [2024-04-23 02:59:24.509050] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509055] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509059] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.509068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.884 [2024-04-23 02:59:24.509089] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.884 [2024-04-23 02:59:24.509276] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.884 [2024-04-23 02:59:24.509286] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.884 [2024-04-23 02:59:24.509290] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509294] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895390) on tqpair=0x84e600 00:14:45.884 [2024-04-23 02:59:24.509303] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509307] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509312] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.509319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.884 [2024-04-23 02:59:24.509326] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509330] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509334] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.509341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.884 [2024-04-23 02:59:24.509347] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509351] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509355] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.509362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.884 [2024-04-23 02:59:24.509368] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509372] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509376] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.509382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.884 [2024-04-23 02:59:24.509388] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:45.884 [2024-04-23 02:59:24.509402] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:45.884 [2024-04-23 02:59:24.509410] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509414] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.884 [2024-04-23 02:59:24.509421] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.884 [2024-04-23 02:59:24.509445] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895390, cid 0, qid 0 00:14:45.884 [2024-04-23 02:59:24.509453] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: 
tcp req 0x8954f0, cid 1, qid 0 00:14:45.884 [2024-04-23 02:59:24.509458] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895650, cid 2, qid 0 00:14:45.884 [2024-04-23 02:59:24.509463] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0 00:14:45.884 [2024-04-23 02:59:24.509468] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.884 [2024-04-23 02:59:24.509903] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.884 [2024-04-23 02:59:24.509919] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.884 [2024-04-23 02:59:24.509924] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.884 [2024-04-23 02:59:24.509929] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.884 [2024-04-23 02:59:24.509935] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:45.884 [2024-04-23 02:59:24.509941] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.509951] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.509958] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.509965] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.509970] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.509974] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.885 [2024-04-23 02:59:24.509982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:45.885 [2024-04-23 02:59:24.510002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.885 [2024-04-23 02:59:24.510060] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.885 [2024-04-23 02:59:24.510067] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.885 [2024-04-23 02:59:24.510071] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.510075] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.885 [2024-04-23 02:59:24.517237] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.517280] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.517298] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517318] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.885 [2024-04-23 02:59:24.517328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:45.885 [2024-04-23 02:59:24.517354] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.885 [2024-04-23 02:59:24.517429] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.885 [2024-04-23 02:59:24.517437] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.885 [2024-04-23 02:59:24.517456] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517460] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=4096, cccid=4 00:14:45.885 [2024-04-23 02:59:24.517466] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895910) on tqpair(0x84e600): expected_datao=0, payload_size=4096 00:14:45.885 [2024-04-23 02:59:24.517471] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517479] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517483] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517492] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.885 [2024-04-23 02:59:24.517499] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.885 [2024-04-23 02:59:24.517503] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517507] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.885 [2024-04-23 02:59:24.517518] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:45.885 [2024-04-23 02:59:24.517532] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.517544] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.517552] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517557] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.885 [2024-04-23 02:59:24.517565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.885 [2024-04-23 02:59:24.517587] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.885 [2024-04-23 02:59:24.517937] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.885 [2024-04-23 02:59:24.517954] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.885 [2024-04-23 02:59:24.517959] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517964] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=4096, cccid=4 00:14:45.885 [2024-04-23 02:59:24.517969] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895910) on tqpair(0x84e600): expected_datao=0, payload_size=4096 00:14:45.885 [2024-04-23 02:59:24.517974] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517982] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517986] 
nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.517996] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.885 [2024-04-23 02:59:24.518002] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.885 [2024-04-23 02:59:24.518006] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518010] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.885 [2024-04-23 02:59:24.518026] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518039] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518048] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518052] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.885 [2024-04-23 02:59:24.518061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.885 [2024-04-23 02:59:24.518083] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.885 [2024-04-23 02:59:24.518499] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.885 [2024-04-23 02:59:24.518515] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.885 [2024-04-23 02:59:24.518520] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518524] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=4096, cccid=4 00:14:45.885 [2024-04-23 02:59:24.518529] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895910) on tqpair(0x84e600): expected_datao=0, payload_size=4096 00:14:45.885 [2024-04-23 02:59:24.518534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518541] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518545] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518554] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.885 [2024-04-23 02:59:24.518560] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.885 [2024-04-23 02:59:24.518564] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518568] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.885 [2024-04-23 02:59:24.518578] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518587] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518601] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518608] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518614] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518620] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:45.885 [2024-04-23 02:59:24.518625] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:45.885 [2024-04-23 02:59:24.518630] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:45.885 [2024-04-23 02:59:24.518658] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518663] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.885 [2024-04-23 02:59:24.518683] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.885 [2024-04-23 02:59:24.518690] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.885 [2024-04-23 02:59:24.518698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84e600) 00:14:45.885 [2024-04-23 02:59:24.518704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:45.885 [2024-04-23 02:59:24.518732] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.886 [2024-04-23 02:59:24.518756] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895a70, cid 5, qid 0 00:14:45.886 [2024-04-23 02:59:24.519110] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.519141] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.519147] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.519152] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.519159] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.519166] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.519170] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.519174] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895a70) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.519186] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.519191] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.519200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.519221] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895a70, cid 5, qid 0 00:14:45.886 [2024-04-23 02:59:24.519647] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.519671] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.519676] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.519680] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895a70) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.519692] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.519697] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.519705] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.519725] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895a70, cid 5, qid 0 00:14:45.886 [2024-04-23 02:59:24.520041] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.520059] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.520064] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520069] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895a70) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.520081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520086] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.520094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.520117] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895a70, cid 5, qid 0 00:14:45.886 [2024-04-23 02:59:24.520570] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.520587] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.520592] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520596] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895a70) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.520612] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520618] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.520641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.520649] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.520660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.520667] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520671] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.520677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.520685] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.520689] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x84e600) 00:14:45.886 [2024-04-23 02:59:24.520695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.886 [2024-04-23 02:59:24.520718] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895a70, cid 5, qid 0 00:14:45.886 [2024-04-23 02:59:24.520725] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895910, cid 4, qid 0 00:14:45.886 [2024-04-23 02:59:24.520730] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895bd0, cid 6, qid 0 00:14:45.886 [2024-04-23 02:59:24.520751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895d30, cid 7, qid 0 00:14:45.886 [2024-04-23 02:59:24.525154] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.886 [2024-04-23 02:59:24.525175] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.886 [2024-04-23 02:59:24.525181] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525185] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=8192, cccid=5 00:14:45.886 [2024-04-23 02:59:24.525190] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895a70) on tqpair(0x84e600): expected_datao=0, payload_size=8192 00:14:45.886 [2024-04-23 02:59:24.525196] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525204] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525209] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525215] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.886 [2024-04-23 02:59:24.525221] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.886 [2024-04-23 02:59:24.525225] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525229] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=512, cccid=4 00:14:45.886 [2024-04-23 02:59:24.525234] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895910) on tqpair(0x84e600): expected_datao=0, payload_size=512 00:14:45.886 [2024-04-23 02:59:24.525239] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525246] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525250] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525256] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.886 [2024-04-23 02:59:24.525263] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.886 [2024-04-23 02:59:24.525266] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.886 
[2024-04-23 02:59:24.525270] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=512, cccid=6 00:14:45.886 [2024-04-23 02:59:24.525275] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895bd0) on tqpair(0x84e600): expected_datao=0, payload_size=512 00:14:45.886 [2024-04-23 02:59:24.525283] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525289] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525293] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525299] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:45.886 [2024-04-23 02:59:24.525306] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:45.886 [2024-04-23 02:59:24.525309] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525313] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x84e600): datao=0, datal=4096, cccid=7 00:14:45.886 [2024-04-23 02:59:24.525318] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x895d30) on tqpair(0x84e600): expected_datao=0, payload_size=4096 00:14:45.886 [2024-04-23 02:59:24.525323] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525330] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525334] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525340] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.525346] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.525350] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525354] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895a70) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.525373] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.525381] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.525384] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525389] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895910) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.525399] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.525406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.525410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525414] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895bd0) on tqpair=0x84e600 00:14:45.886 [2024-04-23 02:59:24.525422] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.886 [2024-04-23 02:59:24.525428] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.886 [2024-04-23 02:59:24.525432] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.886 [2024-04-23 02:59:24.525436] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895d30) on tqpair=0x84e600 00:14:45.886 ===================================================== 00:14:45.886 NVMe over Fabrics 
controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:45.886 =====================================================
00:14:45.886 Controller Capabilities/Features
00:14:45.886 ================================
00:14:45.886 Vendor ID: 8086
00:14:45.886 Subsystem Vendor ID: 8086
00:14:45.886 Serial Number: SPDK00000000000001
00:14:45.886 Model Number: SPDK bdev Controller
00:14:45.886 Firmware Version: 24.05
00:14:45.886 Recommended Arb Burst: 6
00:14:45.886 IEEE OUI Identifier: e4 d2 5c
00:14:45.886 Multi-path I/O
00:14:45.886 May have multiple subsystem ports: Yes
00:14:45.886 May have multiple controllers: Yes
00:14:45.886 Associated with SR-IOV VF: No
00:14:45.886 Max Data Transfer Size: 131072
00:14:45.887 Max Number of Namespaces: 32
00:14:45.887 Max Number of I/O Queues: 127
00:14:45.887 NVMe Specification Version (VS): 1.3
00:14:45.887 NVMe Specification Version (Identify): 1.3
00:14:45.887 Maximum Queue Entries: 128
00:14:45.887 Contiguous Queues Required: Yes
00:14:45.887 Arbitration Mechanisms Supported
00:14:45.887 Weighted Round Robin: Not Supported
00:14:45.887 Vendor Specific: Not Supported
00:14:45.887 Reset Timeout: 15000 ms
00:14:45.887 Doorbell Stride: 4 bytes
00:14:45.887 NVM Subsystem Reset: Not Supported
00:14:45.887 Command Sets Supported
00:14:45.887 NVM Command Set: Supported
00:14:45.887 Boot Partition: Not Supported
00:14:45.887 Memory Page Size Minimum: 4096 bytes
00:14:45.887 Memory Page Size Maximum: 4096 bytes
00:14:45.887 Persistent Memory Region: Not Supported
00:14:45.887 Optional Asynchronous Events Supported
00:14:45.887 Namespace Attribute Notices: Supported
00:14:45.887 Firmware Activation Notices: Not Supported
00:14:45.887 ANA Change Notices: Not Supported
00:14:45.887 PLE Aggregate Log Change Notices: Not Supported
00:14:45.887 LBA Status Info Alert Notices: Not Supported
00:14:45.887 EGE Aggregate Log Change Notices: Not Supported
00:14:45.887 Normal NVM Subsystem Shutdown event: Not Supported
00:14:45.887 Zone Descriptor Change Notices: Not Supported
00:14:45.887 Discovery Log Change Notices: Not Supported
00:14:45.887 Controller Attributes
00:14:45.887 128-bit Host Identifier: Supported
00:14:45.887 Non-Operational Permissive Mode: Not Supported
00:14:45.887 NVM Sets: Not Supported
00:14:45.887 Read Recovery Levels: Not Supported
00:14:45.887 Endurance Groups: Not Supported
00:14:45.887 Predictable Latency Mode: Not Supported
00:14:45.887 Traffic Based Keep ALive: Not Supported
00:14:45.887 Namespace Granularity: Not Supported
00:14:45.887 SQ Associations: Not Supported
00:14:45.887 UUID List: Not Supported
00:14:45.887 Multi-Domain Subsystem: Not Supported
00:14:45.887 Fixed Capacity Management: Not Supported
00:14:45.887 Variable Capacity Management: Not Supported
00:14:45.887 Delete Endurance Group: Not Supported
00:14:45.887 Delete NVM Set: Not Supported
00:14:45.887 Extended LBA Formats Supported: Not Supported
00:14:45.887 Flexible Data Placement Supported: Not Supported
00:14:45.887
00:14:45.887 Controller Memory Buffer Support
00:14:45.887 ================================
00:14:45.887 Supported: No
00:14:45.887
00:14:45.887 Persistent Memory Region Support
00:14:45.887 ================================
00:14:45.887 Supported: No
00:14:45.887
00:14:45.887 Admin Command Set Attributes
00:14:45.887 ============================
00:14:45.887 Security Send/Receive: Not Supported
00:14:45.887 Format NVM: Not Supported
00:14:45.887 Firmware Activate/Download: Not Supported
00:14:45.887 Namespace Management: Not Supported
00:14:45.887 Device Self-Test: Not Supported
00:14:45.887 Directives: Not Supported
00:14:45.887 NVMe-MI: Not Supported
00:14:45.887 Virtualization Management: Not Supported
00:14:45.887 Doorbell Buffer Config: Not Supported
00:14:45.887 Get LBA Status Capability: Not Supported
00:14:45.887 Command & Feature Lockdown Capability: Not Supported
00:14:45.887 Abort Command Limit: 4
00:14:45.887 Async Event Request Limit: 4
00:14:45.887 Number of Firmware Slots: N/A
00:14:45.887 Firmware Slot 1 Read-Only: N/A
00:14:45.887 Firmware Activation Without Reset: N/A
00:14:45.887 Multiple Update Detection Support: N/A
00:14:45.887 Firmware Update Granularity: No Information Provided
00:14:45.887 Per-Namespace SMART Log: No
00:14:45.887 Asymmetric Namespace Access Log Page: Not Supported
00:14:45.887 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:14:45.887 Command Effects Log Page: Supported
00:14:45.887 Get Log Page Extended Data: Supported
00:14:45.887 Telemetry Log Pages: Not Supported
00:14:45.887 Persistent Event Log Pages: Not Supported
00:14:45.887 Supported Log Pages Log Page: May Support
00:14:45.887 Commands Supported & Effects Log Page: Not Supported
00:14:45.887 Feature Identifiers & Effects Log Page:May Support
00:14:45.887 NVMe-MI Commands & Effects Log Page: May Support
00:14:45.887 Data Area 4 for Telemetry Log: Not Supported
00:14:45.887 Error Log Page Entries Supported: 128
00:14:45.887 Keep Alive: Supported
00:14:45.887 Keep Alive Granularity: 10000 ms
00:14:45.887
00:14:45.887 NVM Command Set Attributes
00:14:45.887 ==========================
00:14:45.887 Submission Queue Entry Size
00:14:45.887 Max: 64
00:14:45.887 Min: 64
00:14:45.887 Completion Queue Entry Size
00:14:45.887 Max: 16
00:14:45.887 Min: 16
00:14:45.887 Number of Namespaces: 32
00:14:45.887 Compare Command: Supported
00:14:45.887 Write Uncorrectable Command: Not Supported
00:14:45.887 Dataset Management Command: Supported
00:14:45.887 Write Zeroes Command: Supported
00:14:45.887 Set Features Save Field: Not Supported
00:14:45.887 Reservations: Supported
00:14:45.887 Timestamp: Not Supported
00:14:45.887 Copy: Supported
00:14:45.887 Volatile Write Cache: Present
00:14:45.887 Atomic Write Unit (Normal): 1
00:14:45.887 Atomic Write Unit (PFail): 1
00:14:45.887 Atomic Compare & Write Unit: 1
00:14:45.887 Fused Compare & Write: Supported
00:14:45.887 Scatter-Gather List
00:14:45.887 SGL Command Set: Supported
00:14:45.887 SGL Keyed: Supported
00:14:45.887 SGL Bit Bucket Descriptor: Not Supported
00:14:45.887 SGL Metadata Pointer: Not Supported
00:14:45.887 Oversized SGL: Not Supported
00:14:45.887 SGL Metadata Address: Not Supported
00:14:45.887 SGL Offset: Supported
00:14:45.887 Transport SGL Data Block: Not Supported
00:14:45.887 Replay Protected Memory Block: Not Supported
00:14:45.887
00:14:45.887 Firmware Slot Information
00:14:45.887 =========================
00:14:45.887 Active slot: 1
00:14:45.887 Slot 1 Firmware Revision: 24.05
00:14:45.887
00:14:45.887
00:14:45.887 Commands Supported and Effects
00:14:45.887 ==============================
00:14:45.887 Admin Commands
00:14:45.887 --------------
00:14:45.887 Get Log Page (02h): Supported
00:14:45.887 Identify (06h): Supported
00:14:45.887 Abort (08h): Supported
00:14:45.887 Set Features (09h): Supported
00:14:45.887 Get Features (0Ah): Supported
00:14:45.887 Asynchronous Event Request (0Ch): Supported
00:14:45.887 Keep Alive (18h): Supported
00:14:45.887 I/O Commands
00:14:45.887 ------------
00:14:45.887 Flush (00h): Supported LBA-Change
00:14:45.887 Write (01h): Supported LBA-Change
00:14:45.887 Read (02h): Supported
00:14:45.887 Compare (05h): Supported
00:14:45.887 Write Zeroes (08h): Supported LBA-Change
00:14:45.887 Dataset Management (09h): Supported LBA-Change
00:14:45.887 Copy (19h): Supported LBA-Change
00:14:45.887 Unknown (79h): Supported LBA-Change
00:14:45.887 Unknown (7Ah): Supported
00:14:45.887
00:14:45.887 Error Log
00:14:45.887 =========
00:14:45.887
00:14:45.887 Arbitration
00:14:45.887 ===========
00:14:45.887 Arbitration Burst: 1
00:14:45.887
00:14:45.887 Power Management
00:14:45.887 ================
00:14:45.887 Number of Power States: 1
00:14:45.887 Current Power State: Power State #0
00:14:45.887 Power State #0:
00:14:45.887 Max Power: 0.00 W
00:14:45.887 Non-Operational State: Operational
00:14:45.887 Entry Latency: Not Reported
00:14:45.887 Exit Latency: Not Reported
00:14:45.887 Relative Read Throughput: 0
00:14:45.887 Relative Read Latency: 0
00:14:45.887 Relative Write Throughput: 0
00:14:45.887 Relative Write Latency: 0
00:14:45.887 Idle Power: Not Reported
00:14:45.887 Active Power: Not Reported
00:14:45.887 Non-Operational Permissive Mode: Not Supported
00:14:45.887
00:14:45.887 Health Information
00:14:45.887 ==================
00:14:45.887 Critical Warnings:
00:14:45.887 Available Spare Space: OK
00:14:45.887 Temperature: OK
00:14:45.887 Device Reliability: OK
00:14:45.887 Read Only: No
00:14:45.887 Volatile Memory Backup: OK
00:14:45.887 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:45.887 Temperature Threshold: [2024-04-23 02:59:24.525554] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:45.887 [2024-04-23 02:59:24.525561] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x84e600)
00:14:45.887 [2024-04-23 02:59:24.525571] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:45.887 [2024-04-23 02:59:24.525599] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x895d30, cid 7, qid 0
00:14:45.887 [2024-04-23 02:59:24.526154] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:45.887 [2024-04-23 02:59:24.526172] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:45.887 [2024-04-23 02:59:24.526177] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:45.887 [2024-04-23 02:59:24.526181] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x895d30) on tqpair=0x84e600
00:14:45.887 [2024-04-23 02:59:24.526219] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:14:45.887 [2024-04-23 02:59:24.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:45.888 [2024-04-23 02:59:24.526242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:45.888 [2024-04-23 02:59:24.526249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:45.888 [2024-04-23 02:59:24.526255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:45.888 [2024-04-23 02:59:24.526265] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.526270] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*:
enter 00:14:45.888 [2024-04-23 02:59:24.526274] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600) 00:14:45.888 [2024-04-23 02:59:24.526283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.888 [2024-04-23 02:59:24.526308] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0 00:14:45.888 [2024-04-23 02:59:24.526573] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.888 [2024-04-23 02:59:24.526590] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.888 [2024-04-23 02:59:24.526595] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.526600] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600 00:14:45.888 [2024-04-23 02:59:24.526608] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.526613] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.526617] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600) 00:14:45.888 [2024-04-23 02:59:24.526625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.888 [2024-04-23 02:59:24.526651] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0 00:14:45.888 [2024-04-23 02:59:24.526726] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.888 [2024-04-23 02:59:24.526733] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.888 [2024-04-23 02:59:24.526737] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.526741] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600 00:14:45.888 [2024-04-23 02:59:24.526747] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:45.888 [2024-04-23 02:59:24.526753] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:45.888 [2024-04-23 02:59:24.526763] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.526768] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.526772] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600) 00:14:45.888 [2024-04-23 02:59:24.526780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.888 [2024-04-23 02:59:24.526799] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0 00:14:45.888 [2024-04-23 02:59:24.527100] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.888 [2024-04-23 02:59:24.527116] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.888 [2024-04-23 02:59:24.527121] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.527125] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600 00:14:45.888 [2024-04-23 02:59:24.527150] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
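
With the report complete, the tail of the trace is teardown: nvme_ctrlr_destruct_async prepares the shutdown, queued admin commands complete as ABORTED - SQ DELETION, and the host polls CSTS until the controller reports shutdown done. A companion sketch, under the same assumptions as the one above (`ctrlr` obtained from spdk_nvme_connect()), that reads a few of the fields the identify dump prints and then detaches with SPDK's async API:

```c
/* Sketch only: mirrors fields from the identify report above and the
 * async destruct sequence traced at the end of the run. */
#include <stdio.h>
#include "spdk/nvme.h"

static void report_and_detach(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	struct spdk_nvme_detach_ctx *dctx = NULL;

	/* "Serial Number" / "Model Number" rows (fixed-width, unterminated). */
	printf("SN: %.20s  MN: %.40s\n",
	       (const char *)cdata->sn, (const char *)cdata->mn);
	/* "Max Data Transfer Size: 131072" (MDTS, capped by the transport). */
	printf("max xfer: %u bytes\n", spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));
	/* "Number of Namespaces: 32"; KAS is in 100 ms units, hence the
	 * "Keep Alive Granularity: 10000 ms" row. */
	printf("nn: %u, keep-alive granularity: %u ms\n",
	       cdata->nn, (unsigned)cdata->kas * 100);

	/* Async detach: "Prepare to destruct SSD", then poll until the
	 * controller reports shutdown complete. */
	if (spdk_nvme_detach_async(ctrlr, &dctx) == 0 && dctx != NULL) {
		spdk_nvme_detach_poll(dctx);
	}
}
```
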
00:14:45.888 [2024-04-23 02:59:24.527156] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.527160] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600)
00:14:45.888 [2024-04-23 02:59:24.527168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:45.888 [2024-04-23 02:59:24.527190] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0
00:14:45.888 [2024-04-23 02:59:24.527464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:45.888 [2024-04-23 02:59:24.527480] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:45.888 [2024-04-23 02:59:24.527485] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.527490] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600
00:14:45.888 [2024-04-23 02:59:24.527504] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.527509] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.527513] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600)
00:14:45.888 [2024-04-23 02:59:24.527521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:45.888 [2024-04-23 02:59:24.527542] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0
[...the same FABRIC PROPERTY GET / capsule response debug cycle repeats verbatim at 02:59:24.527604, .528015, .528248 and .528693; duplicate iterations omitted...]
00:14:45.888 [2024-04-23 02:59:24.528821] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:45.888 [2024-04-23 02:59:24.528828] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:45.888 [2024-04-23 02:59:24.528832] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.528836] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600
00:14:45.888 [2024-04-23 02:59:24.528847] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.528852] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:45.888 [2024-04-23 02:59:24.528856] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600)
00:14:45.888 [2024-04-23 02:59:24.528864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT
0x0 00:14:45.888 [2024-04-23 02:59:24.528882] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0 00:14:45.888 [2024-04-23 02:59:24.529111] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.888 [2024-04-23 02:59:24.529118] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.888 [2024-04-23 02:59:24.529122] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.533160] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600 00:14:45.888 [2024-04-23 02:59:24.533200] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.533207] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.533211] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x84e600) 00:14:45.888 [2024-04-23 02:59:24.533220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:45.888 [2024-04-23 02:59:24.533249] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8957b0, cid 3, qid 0 00:14:45.888 [2024-04-23 02:59:24.533310] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:45.888 [2024-04-23 02:59:24.533318] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:45.888 [2024-04-23 02:59:24.533322] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:45.888 [2024-04-23 02:59:24.533326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8957b0) on tqpair=0x84e600 00:14:45.888 [2024-04-23 02:59:24.533335] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:14:45.888 0 Kelvin (-273 Celsius) 00:14:45.888 Available Spare: 0% 00:14:45.888 Available Spare Threshold: 0% 00:14:45.889 Life Percentage Used: 0% 00:14:45.889 Data Units Read: 0 00:14:45.889 Data Units Written: 0 00:14:45.889 Host Read Commands: 0 00:14:45.889 Host Write Commands: 0 00:14:45.889 Controller Busy Time: 0 minutes 00:14:45.889 Power Cycles: 0 00:14:45.889 Power On Hours: 0 hours 00:14:45.889 Unsafe Shutdowns: 0 00:14:45.889 Unrecoverable Media Errors: 0 00:14:45.889 Lifetime Error Log Entries: 0 00:14:45.889 Warning Temperature Time: 0 minutes 00:14:45.889 Critical Temperature Time: 0 minutes 00:14:45.889 00:14:45.889 Number of Queues 00:14:45.889 ================ 00:14:45.889 Number of I/O Submission Queues: 127 00:14:45.889 Number of I/O Completion Queues: 127 00:14:45.889 00:14:45.889 Active Namespaces 00:14:45.889 ================= 00:14:45.889 Namespace ID:1 00:14:45.889 Error Recovery Timeout: Unlimited 00:14:45.889 Command Set Identifier: NVM (00h) 00:14:45.889 Deallocate: Supported 00:14:45.889 Deallocated/Unwritten Error: Not Supported 00:14:45.889 Deallocated Read Value: Unknown 00:14:45.889 Deallocate in Write Zeroes: Not Supported 00:14:45.889 Deallocated Guard Field: 0xFFFF 00:14:45.889 Flush: Supported 00:14:45.889 Reservation: Supported 00:14:45.889 Namespace Sharing Capabilities: Multiple Controllers 00:14:45.889 Size (in LBAs): 131072 (0GiB) 00:14:45.889 Capacity (in LBAs): 131072 (0GiB) 00:14:45.889 Utilization (in LBAs): 131072 (0GiB) 00:14:45.889 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:45.889 EUI64: ABCDEF0123456789 00:14:45.889 UUID: 7e655f70-862e-4938-be89-7fe4a480c8ea 00:14:45.889 Thin Provisioning: 
Not Supported 00:14:45.889 Per-NS Atomic Units: Yes 00:14:45.889 Atomic Boundary Size (Normal): 0 00:14:45.889 Atomic Boundary Size (PFail): 0 00:14:45.889 Atomic Boundary Offset: 0 00:14:45.889 Maximum Single Source Range Length: 65535 00:14:45.889 Maximum Copy Length: 65535 00:14:45.889 Maximum Source Range Count: 1 00:14:45.889 NGUID/EUI64 Never Reused: No 00:14:45.889 Namespace Write Protected: No 00:14:45.889 Number of LBA Formats: 1 00:14:45.889 Current LBA Format: LBA Format #00 00:14:45.889 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:45.889 00:14:45.889 02:59:24 -- host/identify.sh@51 -- # sync 00:14:45.889 02:59:24 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.889 02:59:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:45.889 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.889 02:59:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:45.889 02:59:24 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:45.889 02:59:24 -- host/identify.sh@56 -- # nvmftestfini 00:14:45.889 02:59:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:45.889 02:59:24 -- nvmf/common.sh@117 -- # sync 00:14:45.889 02:59:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.889 02:59:24 -- nvmf/common.sh@120 -- # set +e 00:14:45.889 02:59:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.889 02:59:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.889 rmmod nvme_tcp 00:14:45.889 rmmod nvme_fabrics 00:14:45.889 rmmod nvme_keyring 00:14:45.889 02:59:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.889 02:59:24 -- nvmf/common.sh@124 -- # set -e 00:14:45.889 02:59:24 -- nvmf/common.sh@125 -- # return 0 00:14:45.889 02:59:24 -- nvmf/common.sh@478 -- # '[' -n 86948 ']' 00:14:45.889 02:59:24 -- nvmf/common.sh@479 -- # killprocess 86948 00:14:45.889 02:59:24 -- common/autotest_common.sh@936 -- # '[' -z 86948 ']' 00:14:45.889 02:59:24 -- common/autotest_common.sh@940 -- # kill -0 86948 00:14:45.889 02:59:24 -- common/autotest_common.sh@941 -- # uname 00:14:45.889 02:59:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:45.889 02:59:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86948 00:14:45.889 killing process with pid 86948 00:14:45.889 02:59:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:45.889 02:59:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:45.889 02:59:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86948' 00:14:45.889 02:59:24 -- common/autotest_common.sh@955 -- # kill 86948 00:14:45.889 [2024-04-23 02:59:24.682234] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:45.889 02:59:24 -- common/autotest_common.sh@960 -- # wait 86948 00:14:45.889 02:59:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:45.889 02:59:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:45.889 02:59:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:45.889 02:59:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:45.889 02:59:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:45.889 02:59:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.889 02:59:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:45.889 02:59:24 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:45.889 02:59:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:14:45.889 00:14:45.889 real 0m1.679s 00:14:45.889 user 0m3.877s 00:14:45.889 sys 0m0.553s 00:14:45.889 02:59:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:45.889 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.889 ************************************ 00:14:45.889 END TEST nvmf_identify 00:14:45.889 ************************************ 00:14:45.889 02:59:24 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:45.889 02:59:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:45.889 02:59:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:45.889 02:59:24 -- common/autotest_common.sh@10 -- # set +x 00:14:45.889 ************************************ 00:14:45.889 START TEST nvmf_perf 00:14:45.889 ************************************ 00:14:45.889 02:59:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:46.149 * Looking for test storage... 00:14:46.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:46.149 02:59:25 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.149 02:59:25 -- nvmf/common.sh@7 -- # uname -s 00:14:46.149 02:59:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.149 02:59:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.149 02:59:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.149 02:59:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.149 02:59:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.149 02:59:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.149 02:59:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.149 02:59:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.149 02:59:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.149 02:59:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.149 02:59:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:14:46.149 02:59:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:14:46.149 02:59:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.149 02:59:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.149 02:59:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.149 02:59:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.149 02:59:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.149 02:59:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.149 02:59:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.149 02:59:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.149 02:59:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain prefixes repeated; duplicates omitted...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:46.149 02:59:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:14:46.149 02:59:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:14:46.149 02:59:25 -- paths/export.sh@5 -- # export PATH
00:14:46.149 02:59:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:14:46.149 02:59:25 -- nvmf/common.sh@47 -- # : 0
00:14:46.149 02:59:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:46.149 02:59:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:46.149 02:59:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:14:46.149 02:59:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:14:46.149 02:59:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:14:46.149 02:59:25 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:14:46.149 02:59:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:14:46.149 02:59:25 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:14:46.149 02:59:25 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64
00:14:46.149 02:59:25 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512
00:14:46.149 02:59:25 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:46.149 02:59:25 -- host/perf.sh@17 -- # nvmftestinit
00:14:46.149 02:59:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:14:46.149 02:59:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:14:46.149 02:59:25 -- nvmf/common.sh@437 -- # prepare_net_devs
00:14:46.149 02:59:25 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:14:46.149 02:59:25 -- nvmf/common.sh@401 -- #
remove_spdk_ns 00:14:46.149 02:59:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.149 02:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.149 02:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.149 02:59:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:14:46.149 02:59:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:14:46.149 02:59:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:14:46.149 02:59:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:14:46.149 02:59:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:14:46.149 02:59:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:14:46.149 02:59:25 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.149 02:59:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:46.149 02:59:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:46.149 02:59:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:14:46.149 02:59:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.149 02:59:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.149 02:59:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.149 02:59:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.149 02:59:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.149 02:59:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.149 02:59:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.149 02:59:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.149 02:59:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:14:46.149 02:59:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:14:46.149 Cannot find device "nvmf_tgt_br" 00:14:46.149 02:59:25 -- nvmf/common.sh@155 -- # true 00:14:46.149 02:59:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:14:46.149 Cannot find device "nvmf_tgt_br2" 00:14:46.149 02:59:25 -- nvmf/common.sh@156 -- # true 00:14:46.149 02:59:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:14:46.149 02:59:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:14:46.149 Cannot find device "nvmf_tgt_br" 00:14:46.149 02:59:25 -- nvmf/common.sh@158 -- # true 00:14:46.149 02:59:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:14:46.149 Cannot find device "nvmf_tgt_br2" 00:14:46.149 02:59:25 -- nvmf/common.sh@159 -- # true 00:14:46.149 02:59:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:14:46.149 02:59:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:14:46.149 02:59:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.149 02:59:25 -- nvmf/common.sh@162 -- # true 00:14:46.149 02:59:25 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.149 02:59:25 -- nvmf/common.sh@163 -- # true 00:14:46.149 02:59:25 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.149 02:59:25 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.149 02:59:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.149 02:59:25 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.409 02:59:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.409 02:59:25 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.409 02:59:25 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:46.409 02:59:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:46.409 02:59:25 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:46.409 02:59:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:14:46.409 02:59:25 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:14:46.409 02:59:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:14:46.409 02:59:25 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:14:46.409 02:59:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.409 02:59:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.409 02:59:25 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.409 02:59:25 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:14:46.409 02:59:25 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:14:46.409 02:59:25 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.409 02:59:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.409 02:59:25 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.409 02:59:25 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.409 02:59:25 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.409 02:59:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:14:46.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:14:46.409 00:14:46.409 --- 10.0.0.2 ping statistics --- 00:14:46.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.409 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:14:46.409 02:59:25 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:14:46.409 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:46.409 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:46.409 00:14:46.409 --- 10.0.0.3 ping statistics --- 00:14:46.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.409 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:46.409 02:59:25 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.409 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:46.409 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:46.409 00:14:46.409 --- 10.0.0.1 ping statistics --- 00:14:46.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.409 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:46.409 02:59:25 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.409 02:59:25 -- nvmf/common.sh@422 -- # return 0 00:14:46.409 02:59:25 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:46.409 02:59:25 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.409 02:59:25 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:46.409 02:59:25 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:46.409 02:59:25 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.409 02:59:25 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:46.409 02:59:25 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:46.409 02:59:25 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:46.409 02:59:25 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:46.409 02:59:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:46.409 02:59:25 -- common/autotest_common.sh@10 -- # set +x 00:14:46.409 02:59:25 -- nvmf/common.sh@470 -- # nvmfpid=87155 00:14:46.409 02:59:25 -- nvmf/common.sh@471 -- # waitforlisten 87155 00:14:46.409 02:59:25 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.409 02:59:25 -- common/autotest_common.sh@817 -- # '[' -z 87155 ']' 00:14:46.409 02:59:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.409 02:59:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:46.409 02:59:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.409 02:59:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:46.409 02:59:25 -- common/autotest_common.sh@10 -- # set +x 00:14:46.409 [2024-04-23 02:59:25.532342] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:14:46.409 [2024-04-23 02:59:25.532872] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.668 [2024-04-23 02:59:25.651256] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:14:46.668 [2024-04-23 02:59:25.669912] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.668 [2024-04-23 02:59:25.704882] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.668 [2024-04-23 02:59:25.704952] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.668 [2024-04-23 02:59:25.704979] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.668 [2024-04-23 02:59:25.704987] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.668 [2024-04-23 02:59:25.704994] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
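Note: the nvmfappstart/waitforlisten sequence traced above amounts to launching nvmf_tgt inside the test network namespace and polling its RPC socket until it answers. A minimal sketch of that pattern, assuming the default /var/tmp/spdk.sock RPC socket and a 0.5 s poll interval (neither is spelled out in this log):

    # Start the target in the namespace with the same flags as above.
    NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket; rpc.py fails until the app finishes initialization.
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done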
00:14:46.668 [2024-04-23 02:59:25.705167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.668 [2024-04-23 02:59:25.705269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.668 [2024-04-23 02:59:25.705670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.668 [2024-04-23 02:59:25.705707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.668 02:59:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:46.668 02:59:25 -- common/autotest_common.sh@850 -- # return 0 00:14:46.668 02:59:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:46.668 02:59:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:46.668 02:59:25 -- common/autotest_common.sh@10 -- # set +x 00:14:46.668 02:59:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:46.668 02:59:25 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:46.668 02:59:25 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:47.236 02:59:26 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:47.236 02:59:26 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:47.495 02:59:26 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:47.495 02:59:26 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:47.754 02:59:26 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:47.754 02:59:26 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:14:47.754 02:59:26 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:47.754 02:59:26 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:47.754 02:59:26 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:48.013 [2024-04-23 02:59:26.994704] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:48.013 02:59:27 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:48.272 02:59:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:48.272 02:59:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:48.531 02:59:27 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:48.531 02:59:27 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:48.790 02:59:27 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:48.790 [2024-04-23 02:59:27.899929] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:48.790 02:59:27 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.050 02:59:28 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:49.050 02:59:28 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:49.050 02:59:28 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:49.050 02:59:28 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:50.426 Initializing NVMe 
Controllers 00:14:50.426 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:50.426 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:50.426 Initialization complete. Launching workers. 00:14:50.426 ======================================================== 00:14:50.426 Latency(us) 00:14:50.426 Device Information : IOPS MiB/s Average min max 00:14:50.426 PCIE (0000:00:10.0) NSID 1 from core 0: 24767.98 96.75 1291.34 309.69 7604.10 00:14:50.426 ======================================================== 00:14:50.426 Total : 24767.98 96.75 1291.34 309.69 7604.10 00:14:50.426 00:14:50.426 02:59:29 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:51.802 Initializing NVMe Controllers 00:14:51.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:51.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:51.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:51.802 Initialization complete. Launching workers. 00:14:51.802 ======================================================== 00:14:51.802 Latency(us) 00:14:51.802 Device Information : IOPS MiB/s Average min max 00:14:51.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3289.93 12.85 301.23 117.44 7124.64 00:14:51.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 124.00 0.48 8136.44 5063.76 12027.90 00:14:51.802 ======================================================== 00:14:51.802 Total : 3413.92 13.34 585.81 117.44 12027.90 00:14:51.802 00:14:51.802 02:59:30 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:53.179 Initializing NVMe Controllers 00:14:53.179 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:53.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:53.179 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:53.179 Initialization complete. Launching workers. 00:14:53.179 ======================================================== 00:14:53.179 Latency(us) 00:14:53.179 Device Information : IOPS MiB/s Average min max 00:14:53.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8545.02 33.38 3746.47 489.16 7721.55 00:14:53.179 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4019.19 15.70 8000.27 6145.09 9418.29 00:14:53.179 ======================================================== 00:14:53.179 Total : 12564.21 49.08 5107.22 489.16 9418.29 00:14:53.179 00:14:53.179 02:59:31 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:53.179 02:59:31 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:55.714 Initializing NVMe Controllers 00:14:55.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.714 Controller IO queue size 128, less than required. 00:14:55.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.714 Controller IO queue size 128, less than required. 
00:14:55.714 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:55.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:55.714 Initialization complete. Launching workers. 00:14:55.714 ======================================================== 00:14:55.714 Latency(us) 00:14:55.714 Device Information : IOPS MiB/s Average min max 00:14:55.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1837.96 459.49 70561.27 40818.62 127643.54 00:14:55.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 665.40 166.35 199266.52 105027.67 331942.71 00:14:55.714 ======================================================== 00:14:55.714 Total : 2503.36 625.84 104771.46 40818.62 331942.71 00:14:55.714 00:14:55.715 02:59:34 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:55.715 No valid NVMe controllers or AIO or URING devices found 00:14:55.715 Initializing NVMe Controllers 00:14:55.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:55.715 Controller IO queue size 128, less than required. 00:14:55.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.715 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:55.715 Controller IO queue size 128, less than required. 00:14:55.715 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:55.715 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:55.715 WARNING: Some requested NVMe devices were skipped 00:14:55.715 02:59:34 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:58.249 Initializing NVMe Controllers 00:14:58.249 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:58.249 Controller IO queue size 128, less than required. 00:14:58.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:58.249 Controller IO queue size 128, less than required. 00:14:58.249 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:58.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:58.249 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:58.249 Initialization complete. Launching workers. 
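Side note on the -o 36964 run above: spdk_nvme_perf drops any namespace whose sector size does not evenly divide the requested I/O size, which is why both namespaces were removed. A quick shell check (illustration only, not part of the harness):

    # 36964 bytes is not a whole number of sectors for either namespace.
    for bs in 512 4096; do echo "36964 % $bs = $((36964 % bs))"; done
    # -> 36964 % 512 = 100, 36964 % 4096 = 100; both namespaces are skipped.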
00:14:58.249 00:14:58.249 ==================== 00:14:58.249 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:58.249 TCP transport: 00:14:58.249 polls: 8134 00:14:58.249 idle_polls: 0 00:14:58.249 sock_completions: 8134 00:14:58.249 nvme_completions: 6851 00:14:58.249 submitted_requests: 10292 00:14:58.249 queued_requests: 1 00:14:58.249 00:14:58.249 ==================== 00:14:58.249 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:58.249 TCP transport: 00:14:58.249 polls: 8095 00:14:58.249 idle_polls: 0 00:14:58.249 sock_completions: 8095 00:14:58.249 nvme_completions: 6809 00:14:58.249 submitted_requests: 10202 00:14:58.249 queued_requests: 1 00:14:58.249 ======================================================== 00:14:58.250 Latency(us) 00:14:58.250 Device Information : IOPS MiB/s Average min max 00:14:58.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1710.32 427.58 76641.38 38741.30 128759.55 00:14:58.250 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1699.83 424.96 76029.68 38853.74 120080.46 00:14:58.250 ======================================================== 00:14:58.250 Total : 3410.14 852.54 76336.47 38741.30 128759.55 00:14:58.250 00:14:58.250 02:59:37 -- host/perf.sh@66 -- # sync 00:14:58.250 02:59:37 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:58.508 02:59:37 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:58.508 02:59:37 -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:14:58.508 02:59:37 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:58.767 02:59:37 -- host/perf.sh@72 -- # ls_guid=0707b739-f6aa-46b5-8c55-38a8a96f9516 00:14:58.767 02:59:37 -- host/perf.sh@73 -- # get_lvs_free_mb 0707b739-f6aa-46b5-8c55-38a8a96f9516 00:14:58.767 02:59:37 -- common/autotest_common.sh@1350 -- # local lvs_uuid=0707b739-f6aa-46b5-8c55-38a8a96f9516 00:14:58.767 02:59:37 -- common/autotest_common.sh@1351 -- # local lvs_info 00:14:58.767 02:59:37 -- common/autotest_common.sh@1352 -- # local fc 00:14:58.767 02:59:37 -- common/autotest_common.sh@1353 -- # local cs 00:14:58.767 02:59:37 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:59.026 02:59:38 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:14:59.026 { 00:14:59.026 "uuid": "0707b739-f6aa-46b5-8c55-38a8a96f9516", 00:14:59.026 "name": "lvs_0", 00:14:59.026 "base_bdev": "Nvme0n1", 00:14:59.026 "total_data_clusters": 1278, 00:14:59.026 "free_clusters": 1278, 00:14:59.026 "block_size": 4096, 00:14:59.026 "cluster_size": 4194304 00:14:59.026 } 00:14:59.026 ]' 00:14:59.026 02:59:38 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="0707b739-f6aa-46b5-8c55-38a8a96f9516") .free_clusters' 00:14:59.315 02:59:38 -- common/autotest_common.sh@1355 -- # fc=1278 00:14:59.315 02:59:38 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="0707b739-f6aa-46b5-8c55-38a8a96f9516") .cluster_size' 00:14:59.315 5112 00:14:59.315 02:59:38 -- common/autotest_common.sh@1356 -- # cs=4194304 00:14:59.315 02:59:38 -- common/autotest_common.sh@1359 -- # free_mb=5112 00:14:59.315 02:59:38 -- common/autotest_common.sh@1360 -- # echo 5112 00:14:59.315 02:59:38 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:59.315 02:59:38 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u 0707b739-f6aa-46b5-8c55-38a8a96f9516 lbd_0 5112 00:14:59.574 02:59:38 -- host/perf.sh@80 -- # lb_guid=bb0eb308-4123-46fd-bf81-cb06db945632 00:14:59.574 02:59:38 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore bb0eb308-4123-46fd-bf81-cb06db945632 lvs_n_0 00:14:59.833 02:59:38 -- host/perf.sh@83 -- # ls_nested_guid=90204403-fb89-4f5b-af58-9c259ac201d8 00:14:59.833 02:59:38 -- host/perf.sh@84 -- # get_lvs_free_mb 90204403-fb89-4f5b-af58-9c259ac201d8 00:14:59.833 02:59:38 -- common/autotest_common.sh@1350 -- # local lvs_uuid=90204403-fb89-4f5b-af58-9c259ac201d8 00:14:59.833 02:59:38 -- common/autotest_common.sh@1351 -- # local lvs_info 00:14:59.833 02:59:38 -- common/autotest_common.sh@1352 -- # local fc 00:14:59.833 02:59:38 -- common/autotest_common.sh@1353 -- # local cs 00:14:59.833 02:59:38 -- common/autotest_common.sh@1354 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:00.092 02:59:39 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:15:00.092 { 00:15:00.092 "uuid": "0707b739-f6aa-46b5-8c55-38a8a96f9516", 00:15:00.092 "name": "lvs_0", 00:15:00.092 "base_bdev": "Nvme0n1", 00:15:00.092 "total_data_clusters": 1278, 00:15:00.092 "free_clusters": 0, 00:15:00.092 "block_size": 4096, 00:15:00.092 "cluster_size": 4194304 00:15:00.092 }, 00:15:00.092 { 00:15:00.092 "uuid": "90204403-fb89-4f5b-af58-9c259ac201d8", 00:15:00.092 "name": "lvs_n_0", 00:15:00.092 "base_bdev": "bb0eb308-4123-46fd-bf81-cb06db945632", 00:15:00.092 "total_data_clusters": 1276, 00:15:00.092 "free_clusters": 1276, 00:15:00.092 "block_size": 4096, 00:15:00.092 "cluster_size": 4194304 00:15:00.092 } 00:15:00.092 ]' 00:15:00.092 02:59:39 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="90204403-fb89-4f5b-af58-9c259ac201d8") .free_clusters' 00:15:00.092 02:59:39 -- common/autotest_common.sh@1355 -- # fc=1276 00:15:00.092 02:59:39 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="90204403-fb89-4f5b-af58-9c259ac201d8") .cluster_size' 00:15:00.092 02:59:39 -- common/autotest_common.sh@1356 -- # cs=4194304 00:15:00.092 5104 00:15:00.092 02:59:39 -- common/autotest_common.sh@1359 -- # free_mb=5104 00:15:00.092 02:59:39 -- common/autotest_common.sh@1360 -- # echo 5104 00:15:00.092 02:59:39 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:15:00.351 02:59:39 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 90204403-fb89-4f5b-af58-9c259ac201d8 lbd_nest_0 5104 00:15:00.351 02:59:39 -- host/perf.sh@88 -- # lb_nested_guid=b1d913eb-e337-4a90-b21d-de7cdf832c6c 00:15:00.351 02:59:39 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:00.610 02:59:39 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:15:00.610 02:59:39 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b1d913eb-e337-4a90-b21d-de7cdf832c6c 00:15:00.869 02:59:39 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.127 02:59:40 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:15:01.127 02:59:40 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:15:01.127 02:59:40 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:01.127 02:59:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:01.127 02:59:40 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:01.386 No valid NVMe controllers or AIO or URING devices found 00:15:01.645 Initializing NVMe Controllers 00:15:01.645 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.645 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:01.645 WARNING: Some requested NVMe devices were skipped 00:15:01.645 02:59:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:01.645 02:59:40 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.851 Initializing NVMe Controllers 00:15:13.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.852 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.852 Initialization complete. Launching workers. 00:15:13.852 ======================================================== 00:15:13.852 Latency(us) 00:15:13.852 Device Information : IOPS MiB/s Average min max 00:15:13.852 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1024.40 128.05 974.41 327.00 7741.91 00:15:13.852 ======================================================== 00:15:13.852 Total : 1024.40 128.05 974.41 327.00 7741.91 00:15:13.852 00:15:13.852 02:59:50 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:13.852 02:59:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:13.852 02:59:50 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.852 No valid NVMe controllers or AIO or URING devices found 00:15:13.852 Initializing NVMe Controllers 00:15:13.852 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.852 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:13.852 WARNING: Some requested NVMe devices were skipped 00:15:13.852 02:59:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:13.852 02:59:51 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:23.850 Initializing NVMe Controllers 00:15:23.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.850 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:23.850 Initialization complete. Launching workers. 
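The "invalid ns size 5351931904 / block size 4096 for I/O size 512" warnings above are consistent with the lvol sizing earlier in the run: lvs_n_0 reported 1276 free 4 MiB clusters, lbd_nest_0 was created with 5104 MiB, and a 512-byte I/O cannot address its 4096-byte blocks. A shell arithmetic cross-check (illustration only, not part of the harness):

    # free_clusters * cluster_size, expressed in MiB, gives the lvol size.
    echo $((1276 * 4194304 / 1048576))   # 5104 MiB, the value passed to bdev_lvol_create
    # The same size in bytes matches the ns size printed in the warning...
    echo $((5104 * 1048576))             # 5351931904
    # ...and it is a whole number of 4 KiB blocks, so only 512-byte I/O is rejected.
    echo $((5351931904 / 4096))          # 1306624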
00:15:23.850 ======================================================== 00:15:23.850 Latency(us) 00:15:23.850 Device Information : IOPS MiB/s Average min max 00:15:23.850 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1300.89 162.61 24617.74 6303.35 79594.47 00:15:23.850 ======================================================== 00:15:23.850 Total : 1300.89 162.61 24617.74 6303.35 79594.47 00:15:23.850 00:15:23.850 03:00:01 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:23.850 03:00:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:23.850 03:00:01 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:23.850 No valid NVMe controllers or AIO or URING devices found 00:15:23.850 Initializing NVMe Controllers 00:15:23.850 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.850 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:23.850 WARNING: Some requested NVMe devices were skipped 00:15:23.850 03:00:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:23.850 03:00:01 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:33.826 Initializing NVMe Controllers 00:15:33.826 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:33.826 Controller IO queue size 128, less than required. 00:15:33.826 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:33.826 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:33.826 Initialization complete. Launching workers. 
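The -q 32 table above can be sanity-checked with Little's law: at a constant queue depth, IOPS times mean latency should land back on the queue depth. A one-liner (illustration only, not part of the harness):

    # 1300.89 IOPS * 24617.74 us average latency ~= 32 I/Os in flight, i.e. the -q 32 setting.
    awk 'BEGIN { printf "%.1f\n", 1300.89 * 24617.74 / 1e6 }'   # ~32.0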
00:15:33.826 ======================================================== 00:15:33.826 Latency(us) 00:15:33.826 Device Information : IOPS MiB/s Average min max 00:15:33.826 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3890.55 486.32 32918.60 7540.86 87663.03 00:15:33.826 ======================================================== 00:15:33.826 Total : 3890.55 486.32 32918.60 7540.86 87663.03 00:15:33.826 00:15:33.826 03:00:12 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.826 03:00:12 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b1d913eb-e337-4a90-b21d-de7cdf832c6c 00:15:33.826 03:00:12 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:33.826 03:00:12 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete bb0eb308-4123-46fd-bf81-cb06db945632 00:15:34.392 03:00:13 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:34.392 03:00:13 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:34.392 03:00:13 -- host/perf.sh@114 -- # nvmftestfini 00:15:34.392 03:00:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:34.392 03:00:13 -- nvmf/common.sh@117 -- # sync 00:15:34.392 03:00:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:34.392 03:00:13 -- nvmf/common.sh@120 -- # set +e 00:15:34.392 03:00:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:34.392 03:00:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:34.392 rmmod nvme_tcp 00:15:34.392 rmmod nvme_fabrics 00:15:34.392 rmmod nvme_keyring 00:15:34.392 03:00:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:34.392 03:00:13 -- nvmf/common.sh@124 -- # set -e 00:15:34.392 03:00:13 -- nvmf/common.sh@125 -- # return 0 00:15:34.392 03:00:13 -- nvmf/common.sh@478 -- # '[' -n 87155 ']' 00:15:34.392 03:00:13 -- nvmf/common.sh@479 -- # killprocess 87155 00:15:34.392 03:00:13 -- common/autotest_common.sh@936 -- # '[' -z 87155 ']' 00:15:34.392 03:00:13 -- common/autotest_common.sh@940 -- # kill -0 87155 00:15:34.392 03:00:13 -- common/autotest_common.sh@941 -- # uname 00:15:34.392 03:00:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:34.650 03:00:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87155 00:15:34.650 03:00:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:34.650 03:00:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:34.650 03:00:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87155' 00:15:34.650 killing process with pid 87155 00:15:34.650 03:00:13 -- common/autotest_common.sh@955 -- # kill 87155 00:15:34.650 03:00:13 -- common/autotest_common.sh@960 -- # wait 87155 00:15:36.026 03:00:14 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:36.026 03:00:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:36.026 03:00:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:36.026 03:00:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:36.026 03:00:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:36.026 03:00:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.026 03:00:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:36.026 03:00:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.026 03:00:14 -- nvmf/common.sh@279 -- # ip 
-4 addr flush nvmf_init_if
00:15:36.026
00:15:36.026 real 0m49.788s
00:15:36.026 user 3m7.362s
00:15:36.026 sys 0m12.953s
00:15:36.026 03:00:14 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:15:36.026 03:00:14 -- common/autotest_common.sh@10 -- # set +x
00:15:36.026 ************************************
00:15:36.026 END TEST nvmf_perf
00:15:36.027 ************************************
00:15:36.027 03:00:14 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp
00:15:36.027 03:00:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:15:36.027 03:00:14 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:15:36.027 03:00:14 -- common/autotest_common.sh@10 -- # set +x
00:15:36.027 ************************************
00:15:36.027 START TEST nvmf_fio_host
00:15:36.027 ************************************
00:15:36.027 03:00:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp
00:15:36.027 * Looking for test storage...
00:15:36.027 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:15:36.027 03:00:15 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:36.027 03:00:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:36.027 03:00:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:36.027 03:00:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:36.027 03:00:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same three toolchain prefixes repeated; duplicates omitted...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- paths/export.sh@5 -- # export PATH
00:15:36.027 03:00:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:15:36.027 03:00:15 -- nvmf/common.sh@7 -- # uname -s
00:15:36.027 03:00:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:15:36.027 03:00:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:15:36.027 03:00:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:15:36.027 03:00:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:15:36.027 03:00:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:15:36.027 03:00:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:15:36.027 03:00:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:15:36.027 03:00:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:15:36.027 03:00:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:15:36.027 03:00:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:15:36.027 03:00:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298
00:15:36.027 03:00:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298
00:15:36.027 03:00:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:15:36.027 03:00:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:15:36.027 03:00:15 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:15:36.027 03:00:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:15:36.027 03:00:15 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:36.027 03:00:15 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:36.027 03:00:15 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:36.027 03:00:15 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:36.027 03:00:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- paths/export.sh@5 -- # export PATH
00:15:36.027 03:00:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...duplicate prefixes omitted...]:/var/lib/snapd/snap/bin
00:15:36.027 03:00:15 -- nvmf/common.sh@47 -- # : 0
00:15:36.027 03:00:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:15:36.027 03:00:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:15:36.027 03:00:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:15:36.027 03:00:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:15:36.027 03:00:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:15:36.027 03:00:15 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:15:36.027 03:00:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:15:36.027 03:00:15 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:15:36.027 03:00:15 -- host/fio.sh@12 -- # nvmftestinit
00:15:36.027 03:00:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:15:36.027 03:00:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:15:36.027 03:00:15 -- nvmf/common.sh@437 -- # prepare_net_devs
00:15:36.027 03:00:15 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:15:36.027 03:00:15 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:15:36.027 03:00:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:36.027 03:00:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:36.027 03:00:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:36.027 03:00:15 -- nvmf/common.sh@403 -- # [[ virt != virt ]]
00:15:36.027 03:00:15 -- nvmf/common.sh@405 -- # [[ no == yes ]]
00:15:36.027 03:00:15 -- nvmf/common.sh@412 -- # [[ virt == phy ]]
00:15:36.027 03:00:15 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]]
00:15:36.027 03:00:15 -- nvmf/common.sh@420
-- # [[ tcp == tcp ]] 00:15:36.027 03:00:15 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:36.027 03:00:15 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.027 03:00:15 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:36.027 03:00:15 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:36.027 03:00:15 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:36.027 03:00:15 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.027 03:00:15 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.027 03:00:15 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.027 03:00:15 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.027 03:00:15 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.027 03:00:15 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.027 03:00:15 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.027 03:00:15 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.027 03:00:15 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:36.027 03:00:15 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:36.027 Cannot find device "nvmf_tgt_br" 00:15:36.027 03:00:15 -- nvmf/common.sh@155 -- # true 00:15:36.027 03:00:15 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.027 Cannot find device "nvmf_tgt_br2" 00:15:36.027 03:00:15 -- nvmf/common.sh@156 -- # true 00:15:36.027 03:00:15 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:36.027 03:00:15 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:36.027 Cannot find device "nvmf_tgt_br" 00:15:36.027 03:00:15 -- nvmf/common.sh@158 -- # true 00:15:36.027 03:00:15 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:36.027 Cannot find device "nvmf_tgt_br2" 00:15:36.027 03:00:15 -- nvmf/common.sh@159 -- # true 00:15:36.027 03:00:15 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:36.287 03:00:15 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:36.287 03:00:15 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.287 03:00:15 -- nvmf/common.sh@162 -- # true 00:15:36.287 03:00:15 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.287 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.287 03:00:15 -- nvmf/common.sh@163 -- # true 00:15:36.287 03:00:15 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.287 03:00:15 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.287 03:00:15 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.287 03:00:15 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.287 03:00:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.287 03:00:15 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.287 03:00:15 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.287 03:00:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:36.287 03:00:15 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev 
nvmf_tgt_if2 00:15:36.287 03:00:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:36.287 03:00:15 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:36.287 03:00:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:36.287 03:00:15 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:36.287 03:00:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.287 03:00:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.287 03:00:15 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.287 03:00:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:36.287 03:00:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:36.287 03:00:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.287 03:00:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.287 03:00:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.287 03:00:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.287 03:00:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.287 03:00:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:36.287 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.287 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:15:36.287 00:15:36.287 --- 10.0.0.2 ping statistics --- 00:15:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.287 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:15:36.287 03:00:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:36.287 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.287 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:36.287 00:15:36.287 --- 10.0.0.3 ping statistics --- 00:15:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.287 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:36.287 03:00:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.287 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.287 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:36.287 00:15:36.287 --- 10.0.0.1 ping statistics --- 00:15:36.287 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.287 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:36.287 03:00:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.287 03:00:15 -- nvmf/common.sh@422 -- # return 0 00:15:36.287 03:00:15 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:36.287 03:00:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.287 03:00:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:36.287 03:00:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:36.287 03:00:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.287 03:00:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:36.287 03:00:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:36.546 03:00:15 -- host/fio.sh@14 -- # [[ y != y ]] 00:15:36.546 03:00:15 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:15:36.546 03:00:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:36.546 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.546 03:00:15 -- host/fio.sh@22 -- # nvmfpid=87965 00:15:36.546 03:00:15 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:36.546 03:00:15 -- host/fio.sh@26 -- # waitforlisten 87965 00:15:36.546 03:00:15 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:36.546 03:00:15 -- common/autotest_common.sh@817 -- # '[' -z 87965 ']' 00:15:36.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.546 03:00:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.546 03:00:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:36.546 03:00:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.546 03:00:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:36.546 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.546 [2024-04-23 03:00:15.529794] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:15:36.546 [2024-04-23 03:00:15.530358] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.546 [2024-04-23 03:00:15.653860] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:36.546 [2024-04-23 03:00:15.671832] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.805 [2024-04-23 03:00:15.714517] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.805 [2024-04-23 03:00:15.714824] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.805 [2024-04-23 03:00:15.715100] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.805 [2024-04-23 03:00:15.715283] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
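For reference, the virtual fabric that nvmf_veth_init assembled above — and that the three pings just verified — can be rebuilt by hand with plain iproute2. A minimal sketch, reusing the interface and namespace names from the log (the second target interface, nvmf_tgt_if2 / 10.0.0.3, is added the same way):

  # the target runs inside its own network namespace
  ip netns add nvmf_tgt_ns_spdk
  # one veth pair per endpoint; the *_br peers stay on the host side
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target side
  # bring the links up on both sides of the namespace boundary
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and admit NVMe/TCP traffic on port 4420
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The nvmf_tgt launched just above runs inside that namespace; once the listener is added below, fio connects from the host side across the bridge.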
00:15:36.805 [2024-04-23 03:00:15.715428] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.805 [2024-04-23 03:00:15.715600] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.805 [2024-04-23 03:00:15.716206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.805 [2024-04-23 03:00:15.716285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.805 [2024-04-23 03:00:15.716339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.805 03:00:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:36.805 03:00:15 -- common/autotest_common.sh@850 -- # return 0 00:15:36.805 03:00:15 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:36.805 03:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 [2024-04-23 03:00:15.803224] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.805 03:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.805 03:00:15 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:15:36.805 03:00:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 03:00:15 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:36.805 03:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 Malloc1 00:15:36.805 03:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.805 03:00:15 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:36.805 03:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 03:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.805 03:00:15 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:36.805 03:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 03:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.805 03:00:15 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:36.805 03:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 [2024-04-23 03:00:15.900116] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:36.805 03:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.805 03:00:15 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:36.805 03:00:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:36.805 03:00:15 -- common/autotest_common.sh@10 -- # set +x 00:15:36.805 03:00:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:36.805 03:00:15 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:36.805 03:00:15 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:36.805 03:00:15 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:36.805 03:00:15 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:36.805 03:00:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:36.805 03:00:15 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:36.805 03:00:15 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.805 03:00:15 -- common/autotest_common.sh@1327 -- # shift 00:15:36.805 03:00:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:36.805 03:00:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:36.805 03:00:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:36.805 03:00:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:15:36.805 03:00:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:36.805 03:00:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:36.805 03:00:15 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:36.805 03:00:15 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:37.064 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:37.064 fio-3.35 00:15:37.064 Starting 1 thread 00:15:39.603 00:15:39.603 test: (groupid=0, jobs=1): err= 0: pid=88018: Tue Apr 23 03:00:18 2024 00:15:39.603 read: IOPS=8253, BW=32.2MiB/s (33.8MB/s)(64.7MiB/2007msec) 00:15:39.603 slat (nsec): min=1958, max=317955, avg=2631.01, stdev=3433.93 00:15:39.603 clat (usec): min=2703, max=13641, avg=8073.41, stdev=529.40 00:15:39.603 lat (usec): min=2751, max=13644, avg=8076.04, stdev=529.13 00:15:39.603 clat percentiles (usec): 00:15:39.603 | 1.00th=[ 6915], 5.00th=[ 7308], 10.00th=[ 7504], 20.00th=[ 7701], 00:15:39.603 | 30.00th=[ 7832], 40.00th=[ 7963], 50.00th=[ 8094], 60.00th=[ 8160], 00:15:39.603 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 8848], 00:15:39.603 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[12387], 99.95th=[13304], 00:15:39.603 | 99.99th=[13566] 00:15:39.603 bw ( KiB/s): min=32040, max=33432, per=99.96%, avg=32998.00, stdev=648.16, samples=4 00:15:39.603 iops : min= 8010, max= 8358, avg=8249.50, stdev=162.04, samples=4 00:15:39.603 write: IOPS=8258, BW=32.3MiB/s (33.8MB/s)(64.7MiB/2007msec); 0 zone resets 00:15:39.603 slat (usec): min=2, max=314, avg= 2.80, stdev= 2.97 00:15:39.603 clat (usec): min=2559, max=12882, avg=7376.16, stdev=488.64 00:15:39.603 lat (usec): min=2576, max=12885, avg=7378.96, stdev=488.56 00:15:39.603 clat percentiles (usec): 00:15:39.603 | 1.00th=[ 6325], 5.00th=[ 
6652], 10.00th=[ 6849], 20.00th=[ 7046], 00:15:39.603 | 30.00th=[ 7177], 40.00th=[ 7242], 50.00th=[ 7373], 60.00th=[ 7504], 00:15:39.603 | 70.00th=[ 7570], 80.00th=[ 7701], 90.00th=[ 7898], 95.00th=[ 8094], 00:15:39.603 | 99.00th=[ 8356], 99.50th=[ 8586], 99.90th=[12125], 99.95th=[12518], 00:15:39.603 | 99.99th=[12911] 00:15:39.603 bw ( KiB/s): min=32704, max=33472, per=99.97%, avg=33026.00, stdev=337.16, samples=4 00:15:39.603 iops : min= 8176, max= 8368, avg=8256.50, stdev=84.29, samples=4 00:15:39.603 lat (msec) : 4=0.07%, 10=99.72%, 20=0.21% 00:15:39.603 cpu : usr=69.94%, sys=21.98%, ctx=24, majf=0, minf=6 00:15:39.603 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:39.603 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:39.603 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:39.603 issued rwts: total=16564,16575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:39.603 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:39.603 00:15:39.603 Run status group 0 (all jobs): 00:15:39.603 READ: bw=32.2MiB/s (33.8MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=64.7MiB (67.8MB), run=2007-2007msec 00:15:39.603 WRITE: bw=32.3MiB/s (33.8MB/s), 32.3MiB/s-32.3MiB/s (33.8MB/s-33.8MB/s), io=64.7MiB (67.9MB), run=2007-2007msec 00:15:39.603 03:00:18 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:39.603 03:00:18 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:39.603 03:00:18 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:39.603 03:00:18 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.603 03:00:18 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:39.603 03:00:18 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:39.603 03:00:18 -- common/autotest_common.sh@1327 -- # shift 00:15:39.603 03:00:18 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:39.603 03:00:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:39.603 03:00:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:39.603 03:00:18 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:39.603 03:00:18 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:39.603 03:00:18 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:39.603 03:00:18 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:39.603 03:00:18 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:39.603 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:39.603 fio-3.35 00:15:39.603 Starting 1 thread 00:15:42.141 00:15:42.141 test: (groupid=0, jobs=1): err= 0: pid=88067: Tue Apr 23 03:00:20 2024 00:15:42.141 read: IOPS=7843, BW=123MiB/s (129MB/s)(246MiB/2007msec) 00:15:42.141 slat (usec): min=3, max=259, avg= 4.02, stdev= 2.84 00:15:42.141 clat (usec): min=1682, max=17875, avg=8907.83, stdev=2542.36 00:15:42.141 lat (usec): min=1686, max=17878, avg=8911.84, stdev=2542.48 00:15:42.141 clat percentiles (usec): 00:15:42.141 | 1.00th=[ 4228], 5.00th=[ 5276], 10.00th=[ 5735], 20.00th=[ 6587], 00:15:42.141 | 30.00th=[ 7373], 40.00th=[ 8029], 50.00th=[ 8717], 60.00th=[ 9372], 00:15:42.141 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12387], 95.00th=[13435], 00:15:42.141 | 99.00th=[15401], 99.50th=[16188], 99.90th=[16712], 99.95th=[16909], 00:15:42.141 | 99.99th=[17695] 00:15:42.141 bw ( KiB/s): min=59552, max=73600, per=51.88%, avg=65104.00, stdev=6290.39, samples=4 00:15:42.141 iops : min= 3722, max= 4600, avg=4069.00, stdev=393.15, samples=4 00:15:42.141 write: IOPS=4601, BW=71.9MiB/s (75.4MB/s)(133MiB/1856msec); 0 zone resets 00:15:42.141 slat (usec): min=36, max=383, avg=40.58, stdev= 7.09 00:15:42.141 clat (usec): min=6450, max=24657, avg=12856.18, stdev=2469.49 00:15:42.141 lat (usec): min=6488, max=24695, avg=12896.75, stdev=2471.14 00:15:42.141 clat percentiles (usec): 00:15:42.141 | 1.00th=[ 8291], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10683], 00:15:42.141 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12649], 60.00th=[13435], 00:15:42.141 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16188], 95.00th=[17171], 00:15:42.141 | 99.00th=[19530], 99.50th=[20055], 99.90th=[22152], 99.95th=[23725], 00:15:42.141 | 99.99th=[24773] 00:15:42.141 bw ( KiB/s): min=61728, max=76352, per=91.99%, avg=67728.00, stdev=6685.60, samples=4 00:15:42.141 iops : min= 3858, max= 4772, avg=4233.00, stdev=417.85, samples=4 00:15:42.141 lat (msec) : 2=0.02%, 4=0.41%, 10=47.91%, 20=51.41%, 50=0.25% 00:15:42.141 cpu : usr=82.40%, sys=13.01%, ctx=11, majf=0, minf=3 00:15:42.141 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:15:42.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.141 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.141 issued rwts: total=15742,8541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.141 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.141 00:15:42.141 Run status group 0 (all jobs): 00:15:42.141 READ: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=246MiB (258MB), run=2007-2007msec 00:15:42.141 WRITE: bw=71.9MiB/s (75.4MB/s), 71.9MiB/s-71.9MiB/s (75.4MB/s-75.4MB/s), io=133MiB (140MB), run=1856-1856msec 00:15:42.141 03:00:20 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:42.141 03:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:20 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 03:00:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:20 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:15:42.141 03:00:20 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:15:42.141 03:00:20 -- host/fio.sh@49 -- # get_nvme_bdfs 00:15:42.141 03:00:20 -- common/autotest_common.sh@1499 
-- # bdfs=() 00:15:42.141 03:00:20 -- common/autotest_common.sh@1499 -- # local bdfs 00:15:42.141 03:00:20 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:42.141 03:00:20 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:42.141 03:00:20 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:15:42.141 03:00:20 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:15:42.141 03:00:20 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:15:42.141 03:00:20 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.2 00:15:42.141 03:00:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:20 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 Nvme0n1 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:42.141 03:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- host/fio.sh@51 -- # ls_guid=ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7 00:15:42.141 03:00:21 -- host/fio.sh@52 -- # get_lvs_free_mb ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7 00:15:42.141 03:00:21 -- common/autotest_common.sh@1350 -- # local lvs_uuid=ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7 00:15:42.141 03:00:21 -- common/autotest_common.sh@1351 -- # local lvs_info 00:15:42.141 03:00:21 -- common/autotest_common.sh@1352 -- # local fc 00:15:42.141 03:00:21 -- common/autotest_common.sh@1353 -- # local cs 00:15:42.141 03:00:21 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:42.141 03:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:15:42.141 { 00:15:42.141 "uuid": "ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7", 00:15:42.141 "name": "lvs_0", 00:15:42.141 "base_bdev": "Nvme0n1", 00:15:42.141 "total_data_clusters": 4, 00:15:42.141 "free_clusters": 4, 00:15:42.141 "block_size": 4096, 00:15:42.141 "cluster_size": 1073741824 00:15:42.141 } 00:15:42.141 ]' 00:15:42.141 03:00:21 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7") .free_clusters' 00:15:42.141 03:00:21 -- common/autotest_common.sh@1355 -- # fc=4 00:15:42.141 03:00:21 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7") .cluster_size' 00:15:42.141 4096 00:15:42.141 03:00:21 -- common/autotest_common.sh@1356 -- # cs=1073741824 00:15:42.141 03:00:21 -- common/autotest_common.sh@1359 -- # free_mb=4096 00:15:42.141 03:00:21 -- common/autotest_common.sh@1360 -- # echo 4096 00:15:42.141 03:00:21 -- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:42.141 03:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 3b342a7a-4255-4860-b2e3-7cee1abeb765 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s 
SPDK00000000000001 00:15:42.141 03:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:42.141 03:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:42.141 03:00:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:42.141 03:00:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.141 03:00:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:42.141 03:00:21 -- host/fio.sh@57 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:42.141 03:00:21 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:42.141 03:00:21 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:42.141 03:00:21 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:42.141 03:00:21 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:42.141 03:00:21 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:42.141 03:00:21 -- common/autotest_common.sh@1327 -- # shift 00:15:42.141 03:00:21 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:42.141 03:00:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:42.141 03:00:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:42.141 03:00:21 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:15:42.141 03:00:21 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:42.142 03:00:21 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:42.142 03:00:21 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:42.142 03:00:21 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:42.142 03:00:21 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:42.400 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:42.400 fio-3.35 00:15:42.400 Starting 1 thread 00:15:44.960 00:15:44.960 test: (groupid=0, jobs=1): err= 0: pid=88140: Tue Apr 23 03:00:23 2024 00:15:44.960 
read: IOPS=6319, BW=24.7MiB/s (25.9MB/s)(49.6MiB/2009msec) 00:15:44.960 slat (usec): min=2, max=312, avg= 2.71, stdev= 3.65 00:15:44.960 clat (usec): min=2879, max=18986, avg=10582.45, stdev=856.56 00:15:44.960 lat (usec): min=2889, max=18989, avg=10585.15, stdev=856.24 00:15:44.960 clat percentiles (usec): 00:15:44.960 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:15:44.960 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:15:44.960 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:15:44.960 | 99.00th=[12387], 99.50th=[12911], 99.90th=[16909], 99.95th=[17957], 00:15:44.960 | 99.99th=[19006] 00:15:44.960 bw ( KiB/s): min=24480, max=25672, per=99.98%, avg=25274.00, stdev=543.35, samples=4 00:15:44.960 iops : min= 6120, max= 6418, avg=6318.50, stdev=135.84, samples=4 00:15:44.960 write: IOPS=6318, BW=24.7MiB/s (25.9MB/s)(49.6MiB/2009msec); 0 zone resets 00:15:44.960 slat (usec): min=2, max=246, avg= 2.84, stdev= 2.64 00:15:44.960 clat (usec): min=2510, max=18116, avg=9600.40, stdev=826.89 00:15:44.960 lat (usec): min=2524, max=18119, avg=9603.24, stdev=826.71 00:15:44.960 clat percentiles (usec): 00:15:44.960 | 1.00th=[ 7898], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:15:44.960 | 30.00th=[ 9241], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9765], 00:15:44.960 | 70.00th=[10028], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:15:44.960 | 99.00th=[11338], 99.50th=[11731], 99.90th=[16712], 99.95th=[17171], 00:15:44.960 | 99.99th=[18220] 00:15:44.960 bw ( KiB/s): min=25064, max=25544, per=99.91%, avg=25250.00, stdev=216.78, samples=4 00:15:44.960 iops : min= 6266, max= 6386, avg=6312.50, stdev=54.19, samples=4 00:15:44.960 lat (msec) : 4=0.07%, 10=46.64%, 20=53.29% 00:15:44.960 cpu : usr=71.86%, sys=21.86%, ctx=15, majf=0, minf=15 00:15:44.960 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:44.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:44.960 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:44.960 issued rwts: total=12696,12693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:44.960 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:44.960 00:15:44.960 Run status group 0 (all jobs): 00:15:44.960 READ: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.6MiB (52.0MB), run=2009-2009msec 00:15:44.960 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.6MiB (52.0MB), run=2009-2009msec 00:15:44.960 03:00:23 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- host/fio.sh@62 -- # ls_nested_guid=e590250b-2031-4563-b214-a74ce0ab1beb 00:15:44.960 03:00:23 -- host/fio.sh@63 -- # get_lvs_free_mb e590250b-2031-4563-b214-a74ce0ab1beb 00:15:44.960 03:00:23 -- common/autotest_common.sh@1350 -- # local lvs_uuid=e590250b-2031-4563-b214-a74ce0ab1beb 00:15:44.960 03:00:23 -- 
common/autotest_common.sh@1351 -- # local lvs_info 00:15:44.960 03:00:23 -- common/autotest_common.sh@1352 -- # local fc 00:15:44.960 03:00:23 -- common/autotest_common.sh@1353 -- # local cs 00:15:44.960 03:00:23 -- common/autotest_common.sh@1354 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- common/autotest_common.sh@1354 -- # lvs_info='[ 00:15:44.960 { 00:15:44.960 "uuid": "ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7", 00:15:44.960 "name": "lvs_0", 00:15:44.960 "base_bdev": "Nvme0n1", 00:15:44.960 "total_data_clusters": 4, 00:15:44.960 "free_clusters": 0, 00:15:44.960 "block_size": 4096, 00:15:44.960 "cluster_size": 1073741824 00:15:44.960 }, 00:15:44.960 { 00:15:44.960 "uuid": "e590250b-2031-4563-b214-a74ce0ab1beb", 00:15:44.960 "name": "lvs_n_0", 00:15:44.960 "base_bdev": "3b342a7a-4255-4860-b2e3-7cee1abeb765", 00:15:44.960 "total_data_clusters": 1022, 00:15:44.960 "free_clusters": 1022, 00:15:44.960 "block_size": 4096, 00:15:44.960 "cluster_size": 4194304 00:15:44.960 } 00:15:44.960 ]' 00:15:44.960 03:00:23 -- common/autotest_common.sh@1355 -- # jq '.[] | select(.uuid=="e590250b-2031-4563-b214-a74ce0ab1beb") .free_clusters' 00:15:44.960 03:00:23 -- common/autotest_common.sh@1355 -- # fc=1022 00:15:44.960 03:00:23 -- common/autotest_common.sh@1356 -- # jq '.[] | select(.uuid=="e590250b-2031-4563-b214-a74ce0ab1beb") .cluster_size' 00:15:44.960 4088 00:15:44.960 03:00:23 -- common/autotest_common.sh@1356 -- # cs=4194304 00:15:44.960 03:00:23 -- common/autotest_common.sh@1359 -- # free_mb=4088 00:15:44.960 03:00:23 -- common/autotest_common.sh@1360 -- # echo 4088 00:15:44.960 03:00:23 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 c4330912-9665-4372-b40a-6b855e93153f 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:44.960 03:00:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:44.960 03:00:23 -- common/autotest_common.sh@10 -- # set +x 00:15:44.960 03:00:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:44.960 03:00:23 -- host/fio.sh@68 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:44.960 03:00:23 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:44.960 03:00:23 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:15:44.960 03:00:23 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:44.960 03:00:23 -- common/autotest_common.sh@1325 -- # local sanitizers 00:15:44.961 03:00:23 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.961 03:00:23 -- common/autotest_common.sh@1327 -- # shift 00:15:44.961 03:00:23 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:15:44.961 03:00:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # grep libasan 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:44.961 03:00:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:44.961 03:00:23 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:15:44.961 03:00:23 -- common/autotest_common.sh@1331 -- # asan_lib= 00:15:44.961 03:00:23 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:15:44.961 03:00:23 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:44.961 03:00:23 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:44.961 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:44.961 fio-3.35 00:15:44.961 Starting 1 thread 00:15:47.503 00:15:47.503 test: (groupid=0, jobs=1): err= 0: pid=88200: Tue Apr 23 03:00:26 2024 00:15:47.503 read: IOPS=5644, BW=22.0MiB/s (23.1MB/s)(44.3MiB/2010msec) 00:15:47.503 slat (usec): min=2, max=352, avg= 2.68, stdev= 4.01 00:15:47.503 clat (usec): min=3239, max=20153, avg=11874.30, stdev=970.37 00:15:47.503 lat (usec): min=3249, max=20156, avg=11876.98, stdev=969.96 00:15:47.503 clat percentiles (usec): 00:15:47.503 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:15:47.503 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:15:47.503 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13042], 95.00th=[13304], 00:15:47.503 | 99.00th=[13960], 99.50th=[14484], 99.90th=[18220], 99.95th=[18482], 00:15:47.503 | 99.99th=[20055] 00:15:47.503 bw ( KiB/s): min=21584, max=23065, per=99.92%, avg=22560.25, stdev=672.85, samples=4 00:15:47.503 iops : min= 5396, max= 5766, avg=5640.00, stdev=168.15, samples=4 00:15:47.503 write: IOPS=5613, BW=21.9MiB/s (23.0MB/s)(44.1MiB/2010msec); 0 zone resets 00:15:47.503 slat (usec): min=2, max=128, avg= 2.80, stdev= 1.65 00:15:47.503 clat (usec): min=2109, max=19851, avg=10752.37, stdev=940.40 00:15:47.503 lat (usec): min=2123, max=19854, avg=10755.17, stdev=940.20 00:15:47.503 clat percentiles (usec): 00:15:47.503 | 1.00th=[ 8848], 5.00th=[ 
9503], 10.00th=[ 9765], 20.00th=[10028], 00:15:47.503 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:15:47.503 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:15:47.503 | 99.00th=[12780], 99.50th=[13304], 99.90th=[17957], 99.95th=[18482], 00:15:47.503 | 99.99th=[19792] 00:15:47.503 bw ( KiB/s): min=22208, max=22600, per=99.85%, avg=22422.75, stdev=176.26, samples=4 00:15:47.503 iops : min= 5552, max= 5650, avg=5605.50, stdev=44.16, samples=4 00:15:47.503 lat (msec) : 4=0.06%, 10=9.87%, 20=90.05%, 50=0.01% 00:15:47.503 cpu : usr=70.88%, sys=22.70%, ctx=539, majf=0, minf=15 00:15:47.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:47.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:47.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:47.503 issued rwts: total=11346,11284,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:47.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:47.503 00:15:47.503 Run status group 0 (all jobs): 00:15:47.503 READ: bw=22.0MiB/s (23.1MB/s), 22.0MiB/s-22.0MiB/s (23.1MB/s-23.1MB/s), io=44.3MiB (46.5MB), run=2010-2010msec 00:15:47.503 WRITE: bw=21.9MiB/s (23.0MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=44.1MiB (46.2MB), run=2010-2010msec 00:15:47.503 03:00:26 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:47.503 03:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.503 03:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:47.503 03:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.503 03:00:26 -- host/fio.sh@72 -- # sync 00:15:47.503 03:00:26 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:47.503 03:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.503 03:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:47.503 03:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.503 03:00:26 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:15:47.503 03:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.503 03:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:47.503 03:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.503 03:00:26 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:15:47.503 03:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.503 03:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:47.503 03:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.503 03:00:26 -- host/fio.sh@77 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_0 00:15:47.503 03:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.503 03:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:47.503 03:00:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:47.503 03:00:26 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:15:47.503 03:00:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:47.503 03:00:26 -- common/autotest_common.sh@10 -- # set +x 00:15:48.076 03:00:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:48.076 03:00:27 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:15:48.076 03:00:27 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:15:48.076 03:00:27 -- host/fio.sh@84 -- # nvmftestfini 00:15:48.076 03:00:27 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:48.076 03:00:27 -- nvmf/common.sh@117 -- # sync 00:15:48.076 
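A few mechanics from the runs above are worth spelling out. Each of the four fio invocations goes through the same wrapper: fio_plugin probes the SPDK ioengine with ldd for an ASAN runtime (so that, when present, it can be preloaded ahead of the plugin), then launches stock fio with LD_PRELOAD pointing at build/fio/spdk_nvme and a --filename that encodes the NVMe-oF target (trtype/adrfam/traddr/trsvcid/ns) rather than a block device. The logical-volume sizes are likewise derived, not hard-coded: get_lvs_free_mb multiplies free_clusters by cluster_size, so lvs_0 (4 clusters of 1 GiB) yields the 4096 MiB lbd_0, while the nested lvs_n_0 carved out of it (1022 clusters of 4 MiB) yields 4088 MiB, the missing 8 MiB apparently going to the new store's metadata. A sketch of that computation, using the lvs_0 uuid from the log:

  # what get_lvs_free_mb boils down to (sketch; rpc.py path as used by this job)
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  fc=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.uuid=="ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7") .free_clusters')
  cs=$($rpc bdev_lvol_get_lvstores | jq '.[] | select(.uuid=="ff5b6880-0b5a-47b5-b2a0-ac5c77e8e2e7") .cluster_size')
  echo $(( fc * cs / 1024 / 1024 ))   # 4 * 1073741824 B -> 4096 MiB (lvs_n_0: 1022 * 4 MiB -> 4088 MiB)

Teardown then mirrors creation in reverse — lbd_nest_0 and lvs_n_0 first, then lbd_0 and lvs_0, and only after that bdev_nvme_detach_controller — since each layer claims the bdev beneath it and cannot be removed while still claimed.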
03:00:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:48.076 03:00:27 -- nvmf/common.sh@120 -- # set +e 00:15:48.076 03:00:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:48.076 03:00:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:48.076 rmmod nvme_tcp 00:15:48.076 rmmod nvme_fabrics 00:15:48.076 rmmod nvme_keyring 00:15:48.076 03:00:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:48.076 03:00:27 -- nvmf/common.sh@124 -- # set -e 00:15:48.076 03:00:27 -- nvmf/common.sh@125 -- # return 0 00:15:48.076 03:00:27 -- nvmf/common.sh@478 -- # '[' -n 87965 ']' 00:15:48.076 03:00:27 -- nvmf/common.sh@479 -- # killprocess 87965 00:15:48.076 03:00:27 -- common/autotest_common.sh@936 -- # '[' -z 87965 ']' 00:15:48.076 03:00:27 -- common/autotest_common.sh@940 -- # kill -0 87965 00:15:48.076 03:00:27 -- common/autotest_common.sh@941 -- # uname 00:15:48.076 03:00:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:48.076 03:00:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87965 00:15:48.076 killing process with pid 87965 00:15:48.076 03:00:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:48.076 03:00:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:48.076 03:00:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87965' 00:15:48.076 03:00:27 -- common/autotest_common.sh@955 -- # kill 87965 00:15:48.076 03:00:27 -- common/autotest_common.sh@960 -- # wait 87965 00:15:48.335 03:00:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:48.335 03:00:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:48.335 03:00:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:48.335 03:00:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.335 03:00:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:48.335 03:00:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.335 03:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.335 03:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.335 03:00:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:15:48.335 00:15:48.335 real 0m12.393s 00:15:48.335 user 0m51.708s 00:15:48.335 sys 0m3.563s 00:15:48.335 03:00:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:48.335 ************************************ 00:15:48.335 END TEST nvmf_fio_host 00:15:48.335 ************************************ 00:15:48.335 03:00:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.335 03:00:27 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:48.335 03:00:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:48.335 03:00:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.335 03:00:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.335 ************************************ 00:15:48.335 START TEST nvmf_failover 00:15:48.335 ************************************ 00:15:48.335 03:00:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:48.594 * Looking for test storage... 
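The nvmf_failover test starting here re-runs nvmftestinit from scratch, so the burst of 'Cannot find device ...' and 'Cannot open network namespace ...' messages that follows is expected noise rather than a failure: nvmf_veth_init unconditionally deletes any leftover interfaces and namespace before recreating them, and on a freshly cleaned host every one of those deletes fails harmlessly. The shape of the pattern, roughly:

  # idempotent pre-clean: each delete may fail if the previous test already cleaned up
  ip link set nvmf_tgt_br nomaster || true          # -> Cannot find device "nvmf_tgt_br"
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true
  ip netns add nvmf_tgt_ns_spdk                     # then rebuild from a known-clean state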
00:15:48.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:48.594 03:00:27 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:48.594 03:00:27 -- nvmf/common.sh@7 -- # uname -s 00:15:48.594 03:00:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:48.594 03:00:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:48.594 03:00:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:48.594 03:00:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:48.594 03:00:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:48.594 03:00:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:48.594 03:00:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:48.594 03:00:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:48.594 03:00:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:48.594 03:00:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:48.594 03:00:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:15:48.594 03:00:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:15:48.594 03:00:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:48.594 03:00:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:48.594 03:00:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:48.594 03:00:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:48.594 03:00:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:48.594 03:00:27 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:48.594 03:00:27 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:48.594 03:00:27 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:48.594 03:00:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.594 03:00:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.594 03:00:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.594 03:00:27 -- paths/export.sh@5 -- # export PATH 00:15:48.594 03:00:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:48.594 03:00:27 -- nvmf/common.sh@47 -- # : 0 00:15:48.594 03:00:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:48.594 03:00:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:48.594 03:00:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:48.594 03:00:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:48.594 03:00:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:48.594 03:00:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:48.594 03:00:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:48.594 03:00:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:48.594 03:00:27 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:48.594 03:00:27 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:48.594 03:00:27 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:48.594 03:00:27 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:48.595 03:00:27 -- host/failover.sh@18 -- # nvmftestinit 00:15:48.595 03:00:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:48.595 03:00:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:48.595 03:00:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:48.595 03:00:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:48.595 03:00:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:48.595 03:00:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.595 03:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.595 03:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:48.595 03:00:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:15:48.595 03:00:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:15:48.595 03:00:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:15:48.595 03:00:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:15:48.595 03:00:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:15:48.595 03:00:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:15:48.595 03:00:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:48.595 03:00:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:48.595 03:00:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:48.595 03:00:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:15:48.595 03:00:27 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:48.595 03:00:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:48.595 03:00:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:48.595 03:00:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:48.595 03:00:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:48.595 03:00:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:48.595 03:00:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:48.595 03:00:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:48.595 03:00:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:15:48.595 03:00:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:15:48.595 Cannot find device "nvmf_tgt_br" 00:15:48.595 03:00:27 -- nvmf/common.sh@155 -- # true 00:15:48.595 03:00:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:15:48.595 Cannot find device "nvmf_tgt_br2" 00:15:48.595 03:00:27 -- nvmf/common.sh@156 -- # true 00:15:48.595 03:00:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:15:48.595 03:00:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:15:48.595 Cannot find device "nvmf_tgt_br" 00:15:48.595 03:00:27 -- nvmf/common.sh@158 -- # true 00:15:48.595 03:00:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:15:48.595 Cannot find device "nvmf_tgt_br2" 00:15:48.595 03:00:27 -- nvmf/common.sh@159 -- # true 00:15:48.595 03:00:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:15:48.595 03:00:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:15:48.595 03:00:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:48.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.595 03:00:27 -- nvmf/common.sh@162 -- # true 00:15:48.595 03:00:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:48.595 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:48.595 03:00:27 -- nvmf/common.sh@163 -- # true 00:15:48.595 03:00:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:15:48.595 03:00:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:48.595 03:00:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:48.595 03:00:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:48.595 03:00:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:48.595 03:00:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:48.854 03:00:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:48.854 03:00:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:48.854 03:00:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:48.854 03:00:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:15:48.854 03:00:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:15:48.854 03:00:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:15:48.854 03:00:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:15:48.854 03:00:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:15:48.854 03:00:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:48.854 03:00:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:48.854 03:00:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:15:48.854 03:00:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:15:48.854 03:00:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:15:48.854 03:00:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:48.854 03:00:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:48.854 03:00:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:48.854 03:00:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:48.854 03:00:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:15:48.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:48.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:48.854 00:15:48.854 --- 10.0.0.2 ping statistics --- 00:15:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.854 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:48.854 03:00:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:15:48.854 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:48.854 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:15:48.854 00:15:48.854 --- 10.0.0.3 ping statistics --- 00:15:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.854 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:15:48.854 03:00:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:48.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:48.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:15:48.854 00:15:48.854 --- 10.0.0.1 ping statistics --- 00:15:48.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:48.854 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:15:48.854 03:00:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:48.854 03:00:27 -- nvmf/common.sh@422 -- # return 0 00:15:48.854 03:00:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:48.854 03:00:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:48.854 03:00:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:48.854 03:00:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:48.854 03:00:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:48.854 03:00:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:48.854 03:00:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:48.854 03:00:27 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:48.854 03:00:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:48.854 03:00:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:48.854 03:00:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.854 03:00:27 -- nvmf/common.sh@470 -- # nvmfpid=88426 00:15:48.854 03:00:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:48.854 03:00:27 -- nvmf/common.sh@471 -- # waitforlisten 88426 00:15:48.854 03:00:27 -- common/autotest_common.sh@817 -- # '[' -z 88426 ']' 00:15:48.854 03:00:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:48.854 03:00:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:48.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:48.854 03:00:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:48.854 03:00:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:48.854 03:00:27 -- common/autotest_common.sh@10 -- # set +x 00:15:48.854 [2024-04-23 03:00:27.977660] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:15:48.854 [2024-04-23 03:00:27.977751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.113 [2024-04-23 03:00:28.102508] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:15:49.113 [2024-04-23 03:00:28.122085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:49.113 [2024-04-23 03:00:28.165813] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:49.113 [2024-04-23 03:00:28.166105] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:49.113 [2024-04-23 03:00:28.166389] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:49.113 [2024-04-23 03:00:28.166672] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:49.113 [2024-04-23 03:00:28.166871] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
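The nvmf_veth_init block above builds the whole test network from scratch before the target starts. A minimal hand-run sketch of the same topology, condensed from the commands logged above (setup half only; the pre-cleanup, the "# true" error-swallowing, and retries are omitted, and the ordering is lightly regrouped):

    # Target namespace plus three veth pairs: one initiator-side, two target-side
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Move the target endpoints into the namespace; address everything in 10.0.0.0/24
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring all links up, inside and outside the namespace
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side halves together and admit NVMe/TCP traffic
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings that follow (10.0.0.2, 10.0.0.3 from the host; 10.0.0.1 from inside the namespace) are the sanity check that this topology forwards in both directions before any NVMe traffic is attempted.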
00:15:49.113 [2024-04-23 03:00:28.167256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:49.113 [2024-04-23 03:00:28.167372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:49.113 [2024-04-23 03:00:28.167390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.372 03:00:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:49.372 03:00:28 -- common/autotest_common.sh@850 -- # return 0 00:15:49.372 03:00:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:49.372 03:00:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:49.372 03:00:28 -- common/autotest_common.sh@10 -- # set +x 00:15:49.372 03:00:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:49.372 03:00:28 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:49.631 [2024-04-23 03:00:28.579005] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:49.631 03:00:28 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:49.892 Malloc0 00:15:49.892 03:00:28 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:50.151 03:00:29 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:50.410 03:00:29 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.669 [2024-04-23 03:00:29.669544] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:50.669 03:00:29 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:50.928 [2024-04-23 03:00:29.897724] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:50.928 03:00:29 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:51.187 [2024-04-23 03:00:30.126055] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:51.187 03:00:30 -- host/failover.sh@31 -- # bdevperf_pid=88482 00:15:51.187 03:00:30 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:15:51.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:51.187 03:00:30 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:51.187 03:00:30 -- host/failover.sh@34 -- # waitforlisten 88482 /var/tmp/bdevperf.sock 00:15:51.187 03:00:30 -- common/autotest_common.sh@817 -- # '[' -z 88482 ']' 00:15:51.187 03:00:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:51.187 03:00:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:51.187 03:00:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
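The target bring-up logged above (failover.sh@22 through @28) boils down to a short rpc.py sequence against the default /var/tmp/spdk.sock. A condensed sketch, with every size, name, and port verbatim from the log; the loop is editorial shorthand for the three separate add_listener calls the script issues:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # TCP transport, options exactly as logged
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # 64 MB malloc bdev with 512-byte blocks (MALLOC_BDEV_SIZE / MALLOC_BLOCK_SIZE)
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # Subsystem with open access (-a) and the logged serial, backed by Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Three listeners on the same namespace address, one per failover path
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done

One subsystem with three listening ports is the entire failover surface of this test: each port is a distinct path the initiator can attach to and lose.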
00:15:51.187 03:00:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:51.187 03:00:30 -- common/autotest_common.sh@10 -- # set +x 00:15:52.122 03:00:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:52.122 03:00:31 -- common/autotest_common.sh@850 -- # return 0 00:15:52.122 03:00:31 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.381 NVMe0n1 00:15:52.381 03:00:31 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:52.639 00:15:52.896 03:00:31 -- host/failover.sh@39 -- # run_test_pid=88500 00:15:52.896 03:00:31 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:52.896 03:00:31 -- host/failover.sh@41 -- # sleep 1 00:15:53.831 03:00:32 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:54.089 [2024-04-23 03:00:33.042490] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 [2024-04-23 03:00:33.042580] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 [2024-04-23 03:00:33.042616] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 [2024-04-23 03:00:33.042629] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 [2024-04-23 03:00:33.042641] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 [2024-04-23 03:00:33.042655] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 [2024-04-23 03:00:33.042668] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d93ef0 is same with the state(5) to be set 00:15:54.089 03:00:33 -- host/failover.sh@45 -- # sleep 3 00:15:57.397 03:00:36 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:57.397 00:15:57.397 03:00:36 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:57.655 [2024-04-23 03:00:36.649298] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 [2024-04-23 03:00:36.649355] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 [2024-04-23 03:00:36.649368] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 [2024-04-23 03:00:36.649377] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 [2024-04-23 03:00:36.649385] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 [2024-04-23 03:00:36.649394] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 [2024-04-23 03:00:36.649402] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d945f0 is same with the state(5) to be set 00:15:57.655 03:00:36 -- host/failover.sh@50 -- # sleep 3 00:16:00.945 03:00:39 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:00.945 [2024-04-23 03:00:39.937561] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:00.945 03:00:39 -- host/failover.sh@55 -- # sleep 1 00:16:01.878 03:00:40 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:02.135 03:00:41 -- host/failover.sh@59 -- # wait 88500 00:16:08.702 0 00:16:08.702 03:00:46 -- host/failover.sh@61 -- # killprocess 88482 00:16:08.702 03:00:46 -- common/autotest_common.sh@936 -- # '[' -z 88482 ']' 00:16:08.702 03:00:46 -- common/autotest_common.sh@940 -- # kill -0 88482 00:16:08.702 03:00:46 -- common/autotest_common.sh@941 -- # uname 00:16:08.702 03:00:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:08.702 03:00:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88482 00:16:08.702 killing process with pid 88482 00:16:08.702 03:00:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:08.702 03:00:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:08.702 03:00:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88482' 00:16:08.702 03:00:46 -- common/autotest_common.sh@955 -- # kill 88482 00:16:08.702 03:00:46 -- common/autotest_common.sh@960 -- # wait 88482 00:16:08.702 03:00:47 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:08.702 [2024-04-23 03:00:30.198985] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:16:08.702 [2024-04-23 03:00:30.199096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88482 ] 00:16:08.702 [2024-04-23 03:00:30.322744] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:08.702 [2024-04-23 03:00:30.334803] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.702 [2024-04-23 03:00:30.369656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.702 Running I/O for 15 seconds... 
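The abort storm that follows in try.txt is the direct product of the failover choreography driven between 03:00:31 and 03:00:41. Restated as a condensed sketch (the shorthand variables, the backgrounding, and the path comments are interpretive; every command and argument is verbatim from the xtrace above):

    sock=/var/tmp/bdevperf.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Two paths to the same subsystem before I/O starts (failover.sh@35-36)
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &   # 15 s of I/O

    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # drop path 1
    sleep 3
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # drop path 2
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420     # restore path 1
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422  # drop path 3
    wait   # let the 15 s bdevperf run drain

At no point is the initiator left with zero live listeners for long, which is why bdevperf reports 0 failures despite three deliberate path losses.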
00:16:08.702 [2024-04-23 03:00:33.042750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.042814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.042841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.042858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.042874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.042889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.042904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.042918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.042933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.042947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.042963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.042976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.042992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:81424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.043006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.043021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.702 [2024-04-23 03:00:33.043035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.043051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:81888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.702 [2024-04-23 03:00:33.043065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.702 [2024-04-23 03:00:33.043080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:81896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043110] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:81904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:81920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:81928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:81936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:81944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:81960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:81976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043539] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:81984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.043790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.043822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.043854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81456 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.043886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.043919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.043951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.043983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.043998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.044029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.703 [2024-04-23 03:00:33.044066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:08.703 [2024-04-23 03:00:33.044265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.703 [2024-04-23 03:00:33.044500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.703 [2024-04-23 03:00:33.044539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044601] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044902] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.044939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.044985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.044999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.704 [2024-04-23 03:00:33.045611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 
[2024-04-23 03:00:33.045657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.704 [2024-04-23 03:00:33.045857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.704 [2024-04-23 03:00:33.045871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.045887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.045901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.045916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.045930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.045945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.045958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.045974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.045987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:81 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:82392 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.705 [2024-04-23 03:00:33.046705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.046983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.046998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:08.705 [2024-04-23 03:00:33.047012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.047027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.047041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.705 [2024-04-23 03:00:33.047056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.705 [2024-04-23 03:00:33.047070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:33.047107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:33.047152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:33.047182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a6280 is same with the state(5) to be set 00:16:08.706 [2024-04-23 03:00:33.047224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:08.706 [2024-04-23 03:00:33.047237] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:08.706 [2024-04-23 03:00:33.047249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81880 len:8 PRP1 0x0 PRP2 0x0 00:16:08.706 [2024-04-23 03:00:33.047262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047307] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19a6280 was disconnected and freed. reset controller. 
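The burst above is a long run of paired records: nvme_io_qpair_print_command prints each in-flight READ or WRITE (sqid, cid, nsid, lba, len plus the SGL descriptor), and spdk_nvme_print_completion prints the matching aborted completion. A burst this size is easier to read as per-opcode counts; the sketch below condenses it that way, assuming the records keep exactly the shape shown here (the regex and the summarize() helper are illustrative aids, not SPDK code):

    import re
    from collections import Counter

    # Matches the command-print records in the burst above, e.g.
    # "*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81656 len:8 ..."
    CMD = re.compile(r"\*NOTICE\*: (READ|WRITE) sqid:\d+ cid:\d+ "
                     r"nsid:\d+ lba:\d+ len:\d+")

    def summarize(log_text):
        """Tally aborted commands per opcode in one disconnect burst."""
        return Counter(m.group(1) for m in CMD.finditer(log_text))

    sample = ("nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: "
              "READ sqid:1 cid:84 nsid:1 lba:81656 len:8 "
              "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0")
    print(summarize(sample))  # Counter({'READ': 1})

Fed the whole burst, the same call would show how many READs and WRITEs were outstanding when the qpair was deleted.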
00:16:08.706 [2024-04-23 03:00:33.047325] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:08.706 [2024-04-23 03:00:33.047377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.706 [2024-04-23 03:00:33.047398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.706 [2024-04-23 03:00:33.047457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.706 [2024-04-23 03:00:33.047487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.706 [2024-04-23 03:00:33.047518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:33.047533] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:08.706 [2024-04-23 03:00:33.051722] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:08.706 [2024-04-23 03:00:33.051765] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19870d0 (9): Bad file descriptor 00:16:08.706 [2024-04-23 03:00:33.084740] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
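Every completion printed in this stretch - the I/O aborts in the burst as well as the four ASYNC EVENT REQUEST admin aborts just above - carries the same status, rendered as "ABORTED - SQ DELETION (00/08)": status code type 0x0 (generic command status) and status code 0x08, with p:0 m:0 dnr:0. In the NVMe status format, generic status 0x08 is Command Aborted due to SQ Deletion, and a clear dnr (do-not-retry) bit means the host may resubmit the command, which is what lets I/O continue once the reset succeeds. A minimal decode of that status text (the regex mirrors the record format above; it is an illustration, not SPDK's own parser):

    import re

    record = ("ABORTED - SQ DELETION (00/08) "
              "qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")

    m = re.search(r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\).*"
                  r" dnr:(?P<dnr>\d)", record)
    sct, sc = int(m["sct"], 16), int(m["sc"], 16)

    # NVMe generic status (sct 0x0) code 0x08: Command Aborted due to
    # SQ Deletion; dnr:0 leaves the command eligible for retry.
    assert (sct, sc) == (0x0, 0x08)
    print("retryable:", m["dnr"] == "0")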
00:16:08.706 [2024-04-23 03:00:36.649616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:79176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.649952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.649968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.649984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650032] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650384] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.706 [2024-04-23 03:00:36.650501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.706 [2024-04-23 03:00:36.650604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.706 [2024-04-23 03:00:36.650622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.650637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.650669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.650701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79864 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.650732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.650764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.650973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.650988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:16:08.707 [2024-04-23 03:00:36.651059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.707 [2024-04-23 03:00:36.651297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651392] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.707 [2024-04-23 03:00:36.651663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.707 [2024-04-23 03:00:36.651678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.651710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.651743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.651775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.651807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.651838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.651877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.651910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.651942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.651974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.651991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:16:08.708 [2024-04-23 03:00:36.652440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652777] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.708 [2024-04-23 03:00:36.652922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.652971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.652986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.708 [2024-04-23 03:00:36.653002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.708 [2024-04-23 03:00:36.653017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653098] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 
lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.709 [2024-04-23 03:00:36.653739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:08.709 [2024-04-23 03:00:36.653803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.653949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.709 [2024-04-23 03:00:36.653964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.654007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:08.709 [2024-04-23 03:00:36.654032] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:08.709 [2024-04-23 03:00:36.654046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0 00:16:08.709 [2024-04-23 03:00:36.654060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.654109] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19a6b90 was disconnected and freed. reset controller. 
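As in the first cycle, the tail of the burst switches shape: after the in-flight commands are printed with their SGL descriptors, nvme_qpair_abort_queued_reqs reports "aborting queued i/o" and a final command is "completed manually" with PRP1 0x0 PRP2 0x0 - a request that was still queued in software rather than on the wire, which the log suggests is why it needs manual completion instead of an SQ-deletion abort. A toy classifier keyed on exactly those printed fields (illustrative only):

    def classify(cmd_record):
        """Separate the two kinds of aborted commands seen in each burst:
        queued requests are completed manually and printed with empty PRP
        pointers; in-flight ones carry an SGL descriptor."""
        if "PRP1 0x0 PRP2 0x0" in cmd_record:
            return "queued (completed manually on abort)"
        if "SGL" in cmd_record:
            return "in-flight (aborted by SQ deletion)"
        return "unknown"

    print(classify("READ sqid:1 cid:0 nsid:1 lba:79744 len:8 PRP1 0x0 PRP2 0x0"))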
00:16:08.709 [2024-04-23 03:00:36.654142] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:08.709 [2024-04-23 03:00:36.654201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:36.654224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.654241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:36.654256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.654282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:36.654297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.654312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:36.654327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:36.654342] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:08.709 [2024-04-23 03:00:36.658313] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:08.709 [2024-04-23 03:00:36.658355] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19870d0 (9): Bad file descriptor 00:16:08.709 [2024-04-23 03:00:36.691703] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
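Both completed cycles follow the same order: abort burst, "was disconnected and freed. reset controller.", "Start failover from A to B" (10.0.0.2:4420 to 4421, then 4421 to 4422), a transient "Bad file descriptor" while the stale tqpair 0x19870d0 is flushed, and finally "Resetting controller successful." A sketch that pulls the failover path out of a captured log and checks that each hop is matched by a successful reset (the marker strings are copied from the records above; the helper itself is illustrative):

    import re

    FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
    RESET_OK = "Resetting controller successful."

    def failover_hops(log_text):
        """Return the (old, new) trid pairs in order, requiring as many
        successful resets in the text as failover hops."""
        hops = [m.groups() for m in FAILOVER.finditer(log_text)]
        assert log_text.count(RESET_OK) >= len(hops), "missing reset"
        return hops

    # Over the two cycles above this yields
    # [('10.0.0.2:4420', '10.0.0.2:4421'), ('10.0.0.2:4421', '10.0.0.2:4422')]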
00:16:08.709 [2024-04-23 03:00:41.222177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:41.222238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:41.222292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:41.222308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.709 [2024-04-23 03:00:41.222323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.709 [2024-04-23 03:00:41.222339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:08.710 [2024-04-23 03:00:41.222399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222414] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19870d0 is same with the state(5) to be set 00:16:08.710 [2024-04-23 03:00:41.222508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222756] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.222820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.222852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.222884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.222916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.222947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.222964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.222982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 
[2024-04-23 03:00:41.223477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.710 [2024-04-23 03:00:41.223655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.710 [2024-04-23 03:00:41.223867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.710 [2024-04-23 03:00:41.223899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.223922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.223939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.223955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.223972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.223987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:16:08.711 [2024-04-23 03:00:41.224580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.711 [2024-04-23 03:00:41.224805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.224973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.224990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.225005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.225022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.225037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.711 [2024-04-23 03:00:41.225054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.711 [2024-04-23 03:00:41.225069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 
[2024-04-23 03:00:41.225599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.225614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:1176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:1200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225947] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.225979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.225996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.226011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.226043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.226076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.226107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.226161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.712 [2024-04-23 03:00:41.226207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 
nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.712 [2024-04-23 03:00:41.226493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.712 [2024-04-23 03:00:41.226508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:1280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:16:08.713 [2024-04-23 03:00:41.226708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:08.713 [2024-04-23 03:00:41.226772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.226804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.226839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.226872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.226908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.226941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.226973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.226990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:08.713 [2024-04-23 03:00:41.227005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.227065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:08.713 [2024-04-23 03:00:41.227081] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:16:08.713 [2024-04-23 03:00:41.227094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:816 len:8 PRP1 0x0 PRP2 0x0 00:16:08.713 [2024-04-23 03:00:41.227108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:08.713 [2024-04-23 03:00:41.227171] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19abba0 was disconnected and freed. reset controller. 00:16:08.713 [2024-04-23 03:00:41.227191] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:16:08.713 [2024-04-23 03:00:41.227219] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:08.713 [2024-04-23 03:00:41.231473] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:08.713 [2024-04-23 03:00:41.231518] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19870d0 (9): Bad file descriptor 00:16:08.713 [2024-04-23 03:00:41.267665] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:08.713 00:16:08.713 Latency(us) 00:16:08.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.713 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:08.713 Verification LBA range: start 0x0 length 0x4000 00:16:08.713 NVMe0n1 : 15.01 8571.49 33.48 193.29 0.00 14569.81 670.25 17635.14 00:16:08.713 =================================================================================================================== 00:16:08.713 Total : 8571.49 33.48 193.29 0.00 14569.81 670.25 17635.14 00:16:08.713 Received shutdown signal, test time was about 15.000000 seconds 00:16:08.713 00:16:08.713 Latency(us) 00:16:08.713 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:08.713 =================================================================================================================== 00:16:08.713 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:08.713 03:00:47 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:16:08.713 03:00:47 -- host/failover.sh@65 -- # count=3 00:16:08.713 03:00:47 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:08.713 03:00:47 -- host/failover.sh@73 -- # bdevperf_pid=88676 00:16:08.713 03:00:47 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:08.713 03:00:47 -- host/failover.sh@75 -- # waitforlisten 88676 /var/tmp/bdevperf.sock 00:16:08.713 03:00:47 -- common/autotest_common.sh@817 -- # '[' -z 88676 ']' 00:16:08.713 03:00:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:08.713 03:00:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:08.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:08.713 03:00:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
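Two things happen at this point in the trace: the first bdevperf run is graded (the script requires exactly three 'Resetting controller successful' lines, one per forced path change), and a second bdevperf instance is started with -z so it idles on the RPC socket at /var/tmp/bdevperf.sock until a test is queued. The assertion reduces to something like this sketch, again assuming the captured output lives in try.txt:

count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != 3 )); then
    # One successful reset per failover step is expected by host/failover.sh.
    echo "expected 3 successful controller resets, got $count" >&2
    exit 1
fi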
00:16:08.713 03:00:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:08.713 03:00:47 -- common/autotest_common.sh@10 -- # set +x 00:16:08.713 03:00:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:08.713 03:00:47 -- common/autotest_common.sh@850 -- # return 0 00:16:08.713 03:00:47 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:08.713 [2024-04-23 03:00:47.599685] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:08.713 03:00:47 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:08.713 [2024-04-23 03:00:47.819877] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:08.713 03:00:47 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:09.284 NVMe0n1 00:16:09.284 03:00:48 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:09.542 00:16:09.542 03:00:48 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:09.800 00:16:09.800 03:00:48 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:09.800 03:00:48 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:10.059 03:00:49 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.318 03:00:49 -- host/failover.sh@87 -- # sleep 3 00:16:13.602 03:00:52 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:13.602 03:00:52 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:13.602 03:00:52 -- host/failover.sh@90 -- # run_test_pid=88747 00:16:13.602 03:00:52 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:13.602 03:00:52 -- host/failover.sh@92 -- # wait 88747 00:16:14.537 0 00:16:14.537 03:00:53 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:14.537 [2024-04-23 03:00:47.145105] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:16:14.537 [2024-04-23 03:00:47.145217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88676 ] 00:16:14.537 [2024-04-23 03:00:47.266853] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
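Condensed, the xtrace above performs the following RPC sequence: two extra listeners are added on the target side, the bdevperf-side controller NVMe0 is attached once per path so that 4421 and 4422 become failover trids for the same controller, and the active 4420 path is then detached to force a failover. A shortened sketch of that sequence (here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the target-side calls omit -s because they go to the default SPDK application socket, exactly as in the trace):

# Target side: listen on the two failover ports in addition to 4420.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

# Initiator side (bdevperf): one attach per path; the first creates NVMe0n1,
# the later ones register 4421/4422 as alternative trids for controller NVMe0.
for port in 4420 4421 4422; do
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done

# Drop the active path to force the failover exercised by the next I/O run.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1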
00:16:14.537 [2024-04-23 03:00:47.287473] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.537 [2024-04-23 03:00:47.325373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.537 [2024-04-23 03:00:49.254103] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:14.538 [2024-04-23 03:00:49.254222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.538 [2024-04-23 03:00:49.254247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.538 [2024-04-23 03:00:49.254265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.538 [2024-04-23 03:00:49.254279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.538 [2024-04-23 03:00:49.254293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.538 [2024-04-23 03:00:49.254306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.538 [2024-04-23 03:00:49.254320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:14.538 [2024-04-23 03:00:49.254333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:14.538 [2024-04-23 03:00:49.254347] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:14.538 [2024-04-23 03:00:49.254395] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:14.538 [2024-04-23 03:00:49.254424] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa450d0 (9): Bad file descriptor 00:16:14.538 [2024-04-23 03:00:49.259735] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:14.538 Running I/O for 1 seconds... 
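Because the second bdevperf was started with -z, the verify job does not begin until it is queued over the RPC socket; the trace launches that trigger in the background, waits for it, and then dumps the bdevperf log, where the forced 4420-to-4421 failover and the one-second verify run are visible. The trigger pattern is roughly this sketch, with paths as shown in the trace:

# Queue the configured job on the idle (-z) bdevperf instance and wait for it.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
wait "$run_test_pid"   # returns once the 1-second verify run has finished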
00:16:14.538 00:16:14.538 Latency(us) 00:16:14.538 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.538 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:14.538 Verification LBA range: start 0x0 length 0x4000 00:16:14.538 NVMe0n1 : 1.01 7096.16 27.72 0.00 0.00 17965.82 2278.87 14775.39 00:16:14.538 =================================================================================================================== 00:16:14.538 Total : 7096.16 27.72 0.00 0.00 17965.82 2278.87 14775.39 00:16:14.538 03:00:53 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:14.538 03:00:53 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:14.806 03:00:53 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:15.065 03:00:54 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:15.065 03:00:54 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:15.325 03:00:54 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:15.585 03:00:54 -- host/failover.sh@101 -- # sleep 3 00:16:18.870 03:00:57 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:18.870 03:00:57 -- host/failover.sh@103 -- # grep -q NVMe0 00:16:18.870 03:00:57 -- host/failover.sh@108 -- # killprocess 88676 00:16:18.870 03:00:57 -- common/autotest_common.sh@936 -- # '[' -z 88676 ']' 00:16:18.870 03:00:57 -- common/autotest_common.sh@940 -- # kill -0 88676 00:16:18.870 03:00:57 -- common/autotest_common.sh@941 -- # uname 00:16:18.870 03:00:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:18.870 03:00:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88676 00:16:18.870 killing process with pid 88676 00:16:18.870 03:00:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:18.870 03:00:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:18.870 03:00:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88676' 00:16:18.870 03:00:57 -- common/autotest_common.sh@955 -- # kill 88676 00:16:18.870 03:00:57 -- common/autotest_common.sh@960 -- # wait 88676 00:16:19.128 03:00:58 -- host/failover.sh@110 -- # sync 00:16:19.128 03:00:58 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:19.386 03:00:58 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:19.386 03:00:58 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:19.386 03:00:58 -- host/failover.sh@116 -- # nvmftestfini 00:16:19.386 03:00:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:19.386 03:00:58 -- nvmf/common.sh@117 -- # sync 00:16:19.386 03:00:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.386 03:00:58 -- nvmf/common.sh@120 -- # set +e 00:16:19.386 03:00:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.386 03:00:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.386 rmmod nvme_tcp 00:16:19.386 rmmod nvme_fabrics 00:16:19.386 rmmod nvme_keyring 00:16:19.386 03:00:58 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:16:19.386 03:00:58 -- nvmf/common.sh@124 -- # set -e 00:16:19.386 03:00:58 -- nvmf/common.sh@125 -- # return 0 00:16:19.386 03:00:58 -- nvmf/common.sh@478 -- # '[' -n 88426 ']' 00:16:19.386 03:00:58 -- nvmf/common.sh@479 -- # killprocess 88426 00:16:19.386 03:00:58 -- common/autotest_common.sh@936 -- # '[' -z 88426 ']' 00:16:19.386 03:00:58 -- common/autotest_common.sh@940 -- # kill -0 88426 00:16:19.386 03:00:58 -- common/autotest_common.sh@941 -- # uname 00:16:19.386 03:00:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:19.386 03:00:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88426 00:16:19.386 killing process with pid 88426 00:16:19.386 03:00:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:19.386 03:00:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:19.386 03:00:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88426' 00:16:19.386 03:00:58 -- common/autotest_common.sh@955 -- # kill 88426 00:16:19.386 03:00:58 -- common/autotest_common.sh@960 -- # wait 88426 00:16:19.644 03:00:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:19.644 03:00:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:19.644 03:00:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:19.644 03:00:58 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.644 03:00:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:19.644 03:00:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.644 03:00:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.644 03:00:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.644 03:00:58 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:19.644 00:16:19.644 real 0m31.231s 00:16:19.644 user 2m1.440s 00:16:19.644 sys 0m5.290s 00:16:19.644 03:00:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.644 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 ************************************ 00:16:19.644 END TEST nvmf_failover 00:16:19.644 ************************************ 00:16:19.644 03:00:58 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:19.644 03:00:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:19.644 03:00:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:19.644 03:00:58 -- common/autotest_common.sh@10 -- # set +x 00:16:19.644 ************************************ 00:16:19.644 START TEST nvmf_discovery 00:16:19.644 ************************************ 00:16:19.644 03:00:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:19.902 * Looking for test storage... 
00:16:19.903 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:19.903 03:00:58 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:19.903 03:00:58 -- nvmf/common.sh@7 -- # uname -s 00:16:19.903 03:00:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.903 03:00:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.903 03:00:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.903 03:00:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.903 03:00:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.903 03:00:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.903 03:00:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.903 03:00:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.903 03:00:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.903 03:00:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.903 03:00:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:19.903 03:00:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:19.903 03:00:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.903 03:00:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.903 03:00:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:19.903 03:00:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.903 03:00:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:19.903 03:00:58 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.903 03:00:58 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.903 03:00:58 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.903 03:00:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.903 03:00:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.903 03:00:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.903 03:00:58 -- paths/export.sh@5 -- # export PATH 00:16:19.903 03:00:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.903 03:00:58 -- nvmf/common.sh@47 -- # : 0 00:16:19.903 03:00:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.903 03:00:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.903 03:00:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.903 03:00:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.903 03:00:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.903 03:00:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.903 03:00:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.903 03:00:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.903 03:00:58 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:19.903 03:00:58 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:19.903 03:00:58 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:19.903 03:00:58 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:19.903 03:00:58 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:19.903 03:00:58 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:19.903 03:00:58 -- host/discovery.sh@25 -- # nvmftestinit 00:16:19.903 03:00:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:19.903 03:00:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.903 03:00:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:19.903 03:00:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:19.903 03:00:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:19.903 03:00:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.903 03:00:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.903 03:00:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.903 03:00:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:19.903 03:00:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:19.903 03:00:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:19.903 03:00:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:19.903 03:00:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:19.903 03:00:58 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:19.903 03:00:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:19.903 03:00:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:19.903 03:00:58 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:19.903 03:00:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:19.903 03:00:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:19.903 03:00:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:19.903 03:00:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:19.903 03:00:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:19.903 03:00:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:19.903 03:00:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:19.903 03:00:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:19.903 03:00:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:19.903 03:00:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:19.903 03:00:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:19.903 Cannot find device "nvmf_tgt_br" 00:16:19.903 03:00:58 -- nvmf/common.sh@155 -- # true 00:16:19.903 03:00:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:19.903 Cannot find device "nvmf_tgt_br2" 00:16:19.903 03:00:58 -- nvmf/common.sh@156 -- # true 00:16:19.903 03:00:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:19.903 03:00:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:19.903 Cannot find device "nvmf_tgt_br" 00:16:19.903 03:00:58 -- nvmf/common.sh@158 -- # true 00:16:19.903 03:00:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:19.903 Cannot find device "nvmf_tgt_br2" 00:16:19.903 03:00:58 -- nvmf/common.sh@159 -- # true 00:16:19.903 03:00:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:19.903 03:00:59 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:19.903 03:00:59 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:19.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.903 03:00:59 -- nvmf/common.sh@162 -- # true 00:16:19.903 03:00:59 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:19.903 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:19.903 03:00:59 -- nvmf/common.sh@163 -- # true 00:16:19.903 03:00:59 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:19.903 03:00:59 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:19.903 03:00:59 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:19.903 03:00:59 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:20.162 03:00:59 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:20.162 03:00:59 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:20.162 03:00:59 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:20.162 03:00:59 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:20.162 03:00:59 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:20.162 03:00:59 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:20.162 03:00:59 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:20.162 03:00:59 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:20.162 03:00:59 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:16:20.162 03:00:59 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:16:20.162 03:00:59 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:16:20.162 03:00:59 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:16:20.162 03:00:59 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:16:20.162 03:00:59 -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:16:20.162 03:00:59 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:16:20.162 03:00:59 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:16:20.162 03:00:59 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:20.162 03:00:59 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:20.162 03:00:59 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:20.162 03:00:59 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:16:20.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:20.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms
00:16:20.162
00:16:20.162 --- 10.0.0.2 ping statistics ---
00:16:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:20.162 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms
00:16:20.162 03:00:59 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:16:20.162 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:20.162 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms
00:16:20.162
00:16:20.162 --- 10.0.0.3 ping statistics ---
00:16:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:20.162 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms
00:16:20.162 03:00:59 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:20.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:20.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms
00:16:20.162
00:16:20.162 --- 10.0.0.1 ping statistics ---
00:16:20.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:20.162 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms
00:16:20.162 03:00:59 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:20.162 03:00:59 -- nvmf/common.sh@422 -- # return 0
00:16:20.162 03:00:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:16:20.162 03:00:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:20.162 03:00:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:16:20.162 03:00:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:16:20.162 03:00:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:20.162 03:00:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:16:20.162 03:00:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:16:20.162 03:00:59 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:16:20.162 03:00:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:16:20.162 03:00:59 -- common/autotest_common.sh@710 -- # xtrace_disable
00:16:20.162 03:00:59 -- common/autotest_common.sh@10 -- # set +x
00:16:20.162 03:00:59 -- nvmf/common.sh@470 -- # nvmfpid=89027
00:16:20.162 03:00:59 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:16:20.162 03:00:59 -- nvmf/common.sh@471 -- # waitforlisten 89027
00:16:20.162 03:00:59 -- common/autotest_common.sh@817 -- # '[' -z 89027 ']'
00:16:20.162 03:00:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:20.162 03:00:59 -- common/autotest_common.sh@822 -- # local max_retries=100
00:16:20.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:20.162 03:00:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:20.162 03:00:59 -- common/autotest_common.sh@826 -- # xtrace_disable
00:16:20.162 03:00:59 -- common/autotest_common.sh@10 -- # set +x
00:16:20.162 [2024-04-23 03:00:59.301891] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:16:20.162 [2024-04-23 03:00:59.301995] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:20.421 [2024-04-23 03:00:59.424074] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:20.421 [2024-04-23 03:00:59.441374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:20.421 [2024-04-23 03:00:59.475807] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:20.421 [2024-04-23 03:00:59.475880] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:20.421 [2024-04-23 03:00:59.475906] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:20.421 [2024-04-23 03:00:59.475914] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:20.421 [2024-04-23 03:00:59.475920] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
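The nvmf_veth_init trace above builds the virtual topology the discovery test runs on: a network namespace (nvmf_tgt_ns_spdk) holding the target's interfaces, veth pairs bridged back to the initiator side, an iptables rule admitting the NVMe/TCP port, and ping checks in both directions. The same topology condensed to its essentials; device names and addresses are taken from the log, and the second target interface (nvmf_tgt_if2, 10.0.0.3) is set up identically and omitted here for brevity:

    # Condensed from the nvmf/common.sh veth setup traced above; run as root.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge the two halves
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator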
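nvmfappstart then launches nvmf_tgt inside that namespace, and waitforlisten polls until the target's RPC socket answers, which is what the "Waiting for process to start up..." message above reports. A simplified stand-in for that launch-and-wait step, assuming the SPDK build path from the log; the polling loop is illustrative, not the framework's exact helper:

    # Start the NVMe-oF target in the test namespace and wait for its RPC socket.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for i in {1..100}; do
        # rpc.py defaults to /var/tmp/spdk.sock; unix sockets are visible across netns
        $rpc rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done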
00:16:20.421 [2024-04-23 03:00:59.475950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.366 03:01:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:21.366 03:01:00 -- common/autotest_common.sh@850 -- # return 0 00:16:21.366 03:01:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:21.366 03:01:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 03:01:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:21.366 03:01:00 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:21.366 03:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 [2024-04-23 03:01:00.280063] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:21.366 03:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.366 03:01:00 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:21.366 03:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 [2024-04-23 03:01:00.288179] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:21.366 03:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.366 03:01:00 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:21.366 03:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 null0 00:16:21.366 03:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.366 03:01:00 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:21.366 03:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 null1 00:16:21.366 03:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.366 03:01:00 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:21.366 03:01:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 03:01:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:21.366 03:01:00 -- host/discovery.sh@45 -- # hostpid=89059 00:16:21.366 03:01:00 -- host/discovery.sh@46 -- # waitforlisten 89059 /tmp/host.sock 00:16:21.366 03:01:00 -- common/autotest_common.sh@817 -- # '[' -z 89059 ']' 00:16:21.366 03:01:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:16:21.366 03:01:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:21.366 03:01:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:21.366 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:21.366 03:01:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:21.366 03:01:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.366 03:01:00 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:21.366 [2024-04-23 03:01:00.376957] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 
00:16:21.366 [2024-04-23 03:01:00.377058] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89059 ] 00:16:21.366 [2024-04-23 03:01:00.501242] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:21.366 [2024-04-23 03:01:00.522702] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.624 [2024-04-23 03:01:00.562233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:22.561 03:01:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:22.561 03:01:01 -- common/autotest_common.sh@850 -- # return 0 00:16:22.561 03:01:01 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:22.561 03:01:01 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@72 -- # notify_id=0 00:16:22.561 03:01:01 -- host/discovery.sh@83 -- # get_subsystem_names 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # sort 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # xargs 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:22.561 03:01:01 -- host/discovery.sh@84 -- # get_bdev_list 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # sort 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # xargs 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:16:22.561 03:01:01 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@87 -- # get_subsystem_names 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 
00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # sort 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # xargs 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:22.561 03:01:01 -- host/discovery.sh@88 -- # get_bdev_list 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # sort 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # xargs 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:16:22.561 03:01:01 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@91 -- # get_subsystem_names 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # sort 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- host/discovery.sh@59 -- # xargs 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.561 03:01:01 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:16:22.561 03:01:01 -- host/discovery.sh@92 -- # get_bdev_list 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.561 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.561 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # sort 00:16:22.561 03:01:01 -- host/discovery.sh@55 -- # xargs 00:16:22.561 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.828 03:01:01 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:22.828 03:01:01 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:22.828 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.828 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 [2024-04-23 03:01:01.748745] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:22.828 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.828 03:01:01 -- host/discovery.sh@97 -- # get_subsystem_names 00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # sort 00:16:22.828 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.828 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 03:01:01 -- 
host/discovery.sh@59 -- # xargs 00:16:22.828 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.828 03:01:01 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:16:22.828 03:01:01 -- host/discovery.sh@98 -- # get_bdev_list 00:16:22.828 03:01:01 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.828 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.828 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 03:01:01 -- host/discovery.sh@55 -- # sort 00:16:22.828 03:01:01 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.828 03:01:01 -- host/discovery.sh@55 -- # xargs 00:16:22.828 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.828 03:01:01 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:16:22.828 03:01:01 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:16:22.828 03:01:01 -- host/discovery.sh@79 -- # expected_count=0 00:16:22.828 03:01:01 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:22.828 03:01:01 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:22.828 03:01:01 -- common/autotest_common.sh@901 -- # local max=10 00:16:22.828 03:01:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:22.828 03:01:01 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:22.828 03:01:01 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:22.828 03:01:01 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:22.828 03:01:01 -- host/discovery.sh@74 -- # jq '. | length' 00:16:22.828 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.828 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.828 03:01:01 -- host/discovery.sh@74 -- # notification_count=0 00:16:22.828 03:01:01 -- host/discovery.sh@75 -- # notify_id=0 00:16:22.828 03:01:01 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:22.828 03:01:01 -- common/autotest_common.sh@904 -- # return 0 00:16:22.828 03:01:01 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:22.828 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.828 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:22.828 03:01:01 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:22.828 03:01:01 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:22.828 03:01:01 -- common/autotest_common.sh@901 -- # local max=10 00:16:22.828 03:01:01 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:22.828 03:01:01 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:22.828 03:01:01 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:22.828 03:01:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:22.828 03:01:01 -- common/autotest_common.sh@10 -- # set +x 00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # sort 00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # xargs 
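Most assertions in this test run through the waitforcondition and get_notification_count helpers whose xtrace dominates the surrounding lines: a condition string is re-evaluated up to ten times, one second apart, and bdev add/remove events are counted through notify_get_notifications. Reconstructed from the trace (details of the real autotest helpers may differ):

    # Reconstructed from common/autotest_common.sh@900-906 and host/discovery.sh@74-75 above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    notify_id=0

    waitforcondition() {
        local cond=$1       # e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1            # condition never became true
    }

    get_notification_count() {
        # Count events newer than the last seen notify_id, then advance it.
        notification_count=$($rpc -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }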
00:16:22.828 03:01:01 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:22.828 03:01:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.085 03:01:01 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:16:23.085 03:01:01 -- common/autotest_common.sh@906 -- # sleep 1 00:16:23.343 [2024-04-23 03:01:02.382957] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:23.343 [2024-04-23 03:01:02.382994] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:23.343 [2024-04-23 03:01:02.383054] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:23.343 [2024-04-23 03:01:02.389002] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:23.343 [2024-04-23 03:01:02.444785] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:23.343 [2024-04-23 03:01:02.444813] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:23.909 03:01:02 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.909 03:01:02 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:23.909 03:01:02 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:23.909 03:01:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:23.909 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.909 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:23.909 03:01:02 -- host/discovery.sh@59 -- # sort 00:16:23.909 03:01:03 -- host/discovery.sh@59 -- # xargs 00:16:23.909 03:01:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:23.909 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:23.909 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.909 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:23.909 03:01:03 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:23.909 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:16:23.909 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:23.909 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:23.909 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:16:23.909 03:01:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:23.909 03:01:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.909 03:01:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.909 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:23.909 03:01:03 -- host/discovery.sh@55 -- # sort 00:16:23.909 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:23.909 03:01:03 -- host/discovery.sh@55 -- # xargs 00:16:24.169 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:24.169 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.169 03:01:03 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:24.169 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:16:24.169 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.169 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:24.169 03:01:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:24.169 03:01:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:24.169 03:01:03 -- host/discovery.sh@63 -- # sort -n 00:16:24.169 03:01:03 -- host/discovery.sh@63 -- # xargs 00:16:24.169 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.169 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.169 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:16:24.169 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.169 03:01:03 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:16:24.169 03:01:03 -- host/discovery.sh@79 -- # expected_count=1 00:16:24.169 03:01:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:24.169 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:24.169 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.169 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:24.169 03:01:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:24.169 03:01:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:24.169 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.169 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.169 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.169 03:01:03 -- host/discovery.sh@74 -- # notification_count=1 00:16:24.169 03:01:03 -- host/discovery.sh@75 -- # notify_id=1 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:24.169 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.169 03:01:03 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:24.169 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.169 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.169 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.169 03:01:03 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:24.169 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:24.169 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.169 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:24.169 03:01:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.169 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.169 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.169 03:01:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.169 03:01:03 -- host/discovery.sh@55 -- # sort 00:16:24.169 03:01:03 -- host/discovery.sh@55 -- # xargs 00:16:24.169 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:24.169 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.169 03:01:03 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:16:24.169 03:01:03 -- host/discovery.sh@79 -- # expected_count=1 00:16:24.169 03:01:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:24.169 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:24.169 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.169 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:24.169 03:01:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:24.169 03:01:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:24.169 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.169 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.169 03:01:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:24.169 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.428 03:01:03 -- host/discovery.sh@74 -- # notification_count=1 00:16:24.428 03:01:03 -- host/discovery.sh@75 -- # notify_id=2 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:24.428 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.428 03:01:03 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:24.428 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.428 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.428 [2024-04-23 03:01:03.346031] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:24.428 [2024-04-23 03:01:03.347015] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:24.428 [2024-04-23 03:01:03.347063] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:24.428 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.428 03:01:03 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.428 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:24.428 [2024-04-23 03:01:03.352994] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:24.428 03:01:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:24.428 03:01:03 -- host/discovery.sh@59 -- # sort 00:16:24.428 03:01:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:24.428 03:01:03 -- host/discovery.sh@59 -- # xargs 00:16:24.428 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.428 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.428 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.428 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.428 03:01:03 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.428 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:24.428 03:01:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.428 03:01:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.428 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.428 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.428 03:01:03 -- host/discovery.sh@55 -- # xargs 00:16:24.428 03:01:03 -- host/discovery.sh@55 -- # sort 00:16:24.428 
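At host/discovery.sh@118 above, the test adds a second listener on port 4421, and the discovery service on 8009 reacts with a "new path for nvme0" log page event; the path set is then read back from the attached controller. The same check by hand, with the sockets used in this test (the target answers on the default RPC socket, the discovery host on /tmp/host.sock):

    # Announce a second path and confirm the discovery host attached it.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4421                              # second path on the target
    $rpc -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect: 4420 4421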
[2024-04-23 03:01:03.415292] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:24.428 [2024-04-23 03:01:03.415318] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:24.428 [2024-04-23 03:01:03.415326] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:24.428 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:24.428 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.428 03:01:03 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.428 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:16:24.428 03:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:24.429 03:01:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:24.429 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.429 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 03:01:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:24.429 03:01:03 -- host/discovery.sh@63 -- # sort -n 00:16:24.429 03:01:03 -- host/discovery.sh@63 -- # xargs 00:16:24.429 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.429 03:01:03 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:24.429 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.429 03:01:03 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:16:24.429 03:01:03 -- host/discovery.sh@79 -- # expected_count=0 00:16:24.429 03:01:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:24.429 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:24.429 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.429 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.429 03:01:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:24.429 03:01:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:24.429 03:01:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:24.429 03:01:03 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:24.429 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.429 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.429 03:01:03 -- host/discovery.sh@74 -- # notification_count=0 00:16:24.429 03:01:03 -- host/discovery.sh@75 -- # notify_id=2 00:16:24.429 03:01:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:24.429 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.429 03:01:03 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:24.429 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.429 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 [2024-04-23 03:01:03.575361] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:24.429 [2024-04-23 03:01:03.575397] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:24.429 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.429 03:01:03 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:24.429 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:16:24.429 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.429 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.429 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:16:24.429 [2024-04-23 03:01:03.581361] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:16:24.429 [2024-04-23 03:01:03.581394] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:24.429 [2024-04-23 03:01:03.581561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.429 [2024-04-23 03:01:03.581595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.429 [2024-04-23 03:01:03.581624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.429 [2024-04-23 03:01:03.581634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.429 [2024-04-23 03:01:03.581644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.429 [2024-04-23 03:01:03.581652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.429 [2024-04-23 03:01:03.581662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:24.429 [2024-04-23 03:01:03.581671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.429 [2024-04-23 03:01:03.581679] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15e84d0 is same with the state(5) to be 
set 00:16:24.429 03:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:24.429 03:01:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:24.429 03:01:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:24.429 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.429 03:01:03 -- host/discovery.sh@59 -- # sort 00:16:24.429 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.429 03:01:03 -- host/discovery.sh@59 -- # xargs 00:16:24.686 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.686 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.686 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.686 03:01:03 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:24.686 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:16:24.686 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.686 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.686 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:16:24.686 03:01:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:24.687 03:01:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.687 03:01:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.687 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.687 03:01:03 -- host/discovery.sh@55 -- # sort 00:16:24.687 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.687 03:01:03 -- host/discovery.sh@55 -- # xargs 00:16:24.687 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:24.687 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.687 03:01:03 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:24.687 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:16:24.687 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.687 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:16:24.687 03:01:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:24.687 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.687 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.687 03:01:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:24.687 03:01:03 -- host/discovery.sh@63 -- # xargs 00:16:24.687 03:01:03 -- host/discovery.sh@63 -- # sort -n 00:16:24.687 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:16:24.687 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.687 03:01:03 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:16:24.687 03:01:03 -- host/discovery.sh@79 -- # expected_count=0 00:16:24.687 03:01:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && 
((notification_count == expected_count))' 00:16:24.687 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:24.687 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.687 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:24.687 03:01:03 -- host/discovery.sh@74 -- # jq '. | length' 00:16:24.687 03:01:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:24.687 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.687 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.687 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.687 03:01:03 -- host/discovery.sh@74 -- # notification_count=0 00:16:24.687 03:01:03 -- host/discovery.sh@75 -- # notify_id=2 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:24.687 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.687 03:01:03 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:24.687 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.687 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.687 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.687 03:01:03 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:16:24.687 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:16:24.687 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.687 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:16:24.687 03:01:03 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:16:24.687 03:01:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:24.687 03:01:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:24.687 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.687 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.687 03:01:03 -- host/discovery.sh@59 -- # sort 00:16:24.687 03:01:03 -- host/discovery.sh@59 -- # xargs 00:16:24.687 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.944 03:01:03 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:16:24.944 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.944 03:01:03 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:16:24.944 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:16:24.944 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.944 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.944 03:01:03 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:16:24.944 03:01:03 -- common/autotest_common.sh@903 -- # get_bdev_list 00:16:24.944 03:01:03 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.944 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.944 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.944 03:01:03 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.944 03:01:03 -- host/discovery.sh@55 -- # sort 00:16:24.944 03:01:03 -- host/discovery.sh@55 -- # xargs 00:16:24.944 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.944 03:01:03 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:16:24.944 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.944 03:01:03 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:16:24.944 03:01:03 -- host/discovery.sh@79 -- # expected_count=2 00:16:24.944 03:01:03 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:16:24.945 03:01:03 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:16:24.945 03:01:03 -- common/autotest_common.sh@901 -- # local max=10 00:16:24.945 03:01:03 -- common/autotest_common.sh@902 -- # (( max-- )) 00:16:24.945 03:01:03 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:16:24.945 03:01:03 -- common/autotest_common.sh@903 -- # get_notification_count 00:16:24.945 03:01:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:24.945 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.945 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.945 03:01:03 -- host/discovery.sh@74 -- # jq '. | length' 00:16:24.945 03:01:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:24.945 03:01:03 -- host/discovery.sh@74 -- # notification_count=2 00:16:24.945 03:01:03 -- host/discovery.sh@75 -- # notify_id=4 00:16:24.945 03:01:03 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:16:24.945 03:01:03 -- common/autotest_common.sh@904 -- # return 0 00:16:24.945 03:01:03 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:24.945 03:01:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:24.945 03:01:03 -- common/autotest_common.sh@10 -- # set +x 00:16:25.879 [2024-04-23 03:01:05.003769] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:25.879 [2024-04-23 03:01:05.003800] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:25.879 [2024-04-23 03:01:05.003819] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:25.879 [2024-04-23 03:01:05.009801] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:26.136 [2024-04-23 03:01:05.069244] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:26.136 [2024-04-23 03:01:05.069287] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:26.136 03:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.136 03:01:05 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.136 03:01:05 -- common/autotest_common.sh@638 -- # local es=0 00:16:26.137 03:01:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery 
-b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.137 03:01:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:26.137 03:01:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:26.137 03:01:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:26.137 03:01:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:26.137 03:01:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.137 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.137 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 request: 00:16:26.137 { 00:16:26.137 "name": "nvme", 00:16:26.137 "trtype": "tcp", 00:16:26.137 "traddr": "10.0.0.2", 00:16:26.137 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:26.137 "adrfam": "ipv4", 00:16:26.137 "trsvcid": "8009", 00:16:26.137 "wait_for_attach": true, 00:16:26.137 "method": "bdev_nvme_start_discovery", 00:16:26.137 "req_id": 1 00:16:26.137 } 00:16:26.137 Got JSON-RPC error response 00:16:26.137 response: 00:16:26.137 { 00:16:26.137 "code": -17, 00:16:26.137 "message": "File exists" 00:16:26.137 } 00:16:26.137 03:01:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:26.137 03:01:05 -- common/autotest_common.sh@641 -- # es=1 00:16:26.137 03:01:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:26.137 03:01:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:26.137 03:01:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:26.137 03:01:05 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # sort 00:16:26.137 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # xargs 00:16:26.137 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 03:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.137 03:01:05 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:16:26.137 03:01:05 -- host/discovery.sh@146 -- # get_bdev_list 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # xargs 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # sort 00:16:26.137 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.137 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 03:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.137 03:01:05 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:26.137 03:01:05 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.137 03:01:05 -- common/autotest_common.sh@638 -- # local es=0 00:16:26.137 03:01:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.137 03:01:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:26.137 03:01:05 -- common/autotest_common.sh@630 -- # case "$(type 
-t "$arg")" in 00:16:26.137 03:01:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:26.137 03:01:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:26.137 03:01:05 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.137 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.137 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 request: 00:16:26.137 { 00:16:26.137 "name": "nvme_second", 00:16:26.137 "trtype": "tcp", 00:16:26.137 "traddr": "10.0.0.2", 00:16:26.137 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:26.137 "adrfam": "ipv4", 00:16:26.137 "trsvcid": "8009", 00:16:26.137 "wait_for_attach": true, 00:16:26.137 "method": "bdev_nvme_start_discovery", 00:16:26.137 "req_id": 1 00:16:26.137 } 00:16:26.137 Got JSON-RPC error response 00:16:26.137 response: 00:16:26.137 { 00:16:26.137 "code": -17, 00:16:26.137 "message": "File exists" 00:16:26.137 } 00:16:26.137 03:01:05 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:26.137 03:01:05 -- common/autotest_common.sh@641 -- # es=1 00:16:26.137 03:01:05 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:26.137 03:01:05 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:26.137 03:01:05 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:26.137 03:01:05 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:26.137 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # sort 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # xargs 00:16:26.137 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 03:01:05 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:26.137 03:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.137 03:01:05 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:16:26.137 03:01:05 -- host/discovery.sh@152 -- # get_bdev_list 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:26.137 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.137 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # sort 00:16:26.137 03:01:05 -- host/discovery.sh@55 -- # xargs 00:16:26.395 03:01:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:26.395 03:01:05 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:26.395 03:01:05 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:26.395 03:01:05 -- common/autotest_common.sh@638 -- # local es=0 00:16:26.395 03:01:05 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:26.395 03:01:05 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:16:26.395 03:01:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:26.395 03:01:05 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:16:26.395 03:01:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:26.395 03:01:05 
-- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:26.395 03:01:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:26.395 03:01:05 -- common/autotest_common.sh@10 -- # set +x 00:16:27.340 [2024-04-23 03:01:06.343178] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.340 [2024-04-23 03:01:06.343333] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.340 [2024-04-23 03:01:06.343421] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:27.340 [2024-04-23 03:01:06.343438] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163b0b0 with addr=10.0.0.2, port=8010 00:16:27.340 [2024-04-23 03:01:06.343457] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:27.340 [2024-04-23 03:01:06.343467] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:27.340 [2024-04-23 03:01:06.343476] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:28.279 [2024-04-23 03:01:07.343153] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.279 [2024-04-23 03:01:07.343313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.279 [2024-04-23 03:01:07.343356] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.279 [2024-04-23 03:01:07.343373] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x163b0b0 with addr=10.0.0.2, port=8010 00:16:28.279 [2024-04-23 03:01:07.343391] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:28.279 [2024-04-23 03:01:07.343401] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:28.279 [2024-04-23 03:01:07.343437] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:29.213 [2024-04-23 03:01:08.342991] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:29.213 request: 00:16:29.213 { 00:16:29.213 "name": "nvme_second", 00:16:29.213 "trtype": "tcp", 00:16:29.213 "traddr": "10.0.0.2", 00:16:29.213 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:29.213 "adrfam": "ipv4", 00:16:29.213 "trsvcid": "8010", 00:16:29.213 "attach_timeout_ms": 3000, 00:16:29.213 "method": "bdev_nvme_start_discovery", 00:16:29.213 "req_id": 1 00:16:29.213 } 00:16:29.213 Got JSON-RPC error response 00:16:29.213 response: 00:16:29.213 { 00:16:29.213 "code": -110, 00:16:29.213 "message": "Connection timed out" 00:16:29.213 } 00:16:29.213 03:01:08 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:16:29.213 03:01:08 -- common/autotest_common.sh@641 -- # es=1 00:16:29.213 03:01:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:29.213 03:01:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:29.213 03:01:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:29.213 03:01:08 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:16:29.213 03:01:08 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:29.213 03:01:08 -- host/discovery.sh@67 -- # sort 00:16:29.213 03:01:08 -- host/discovery.sh@67 -- # xargs 00:16:29.213 03:01:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.213 03:01:08 -- 
common/autotest_common.sh@10 -- # set +x 00:16:29.213 03:01:08 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:29.213 03:01:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.471 03:01:08 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:16:29.471 03:01:08 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:16:29.471 03:01:08 -- host/discovery.sh@161 -- # kill 89059 00:16:29.471 03:01:08 -- host/discovery.sh@162 -- # nvmftestfini 00:16:29.471 03:01:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:29.471 03:01:08 -- nvmf/common.sh@117 -- # sync 00:16:29.471 03:01:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:29.471 03:01:08 -- nvmf/common.sh@120 -- # set +e 00:16:29.471 03:01:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:29.471 03:01:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:29.471 rmmod nvme_tcp 00:16:29.471 rmmod nvme_fabrics 00:16:29.471 rmmod nvme_keyring 00:16:29.471 03:01:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:29.471 03:01:08 -- nvmf/common.sh@124 -- # set -e 00:16:29.471 03:01:08 -- nvmf/common.sh@125 -- # return 0 00:16:29.471 03:01:08 -- nvmf/common.sh@478 -- # '[' -n 89027 ']' 00:16:29.471 03:01:08 -- nvmf/common.sh@479 -- # killprocess 89027 00:16:29.471 03:01:08 -- common/autotest_common.sh@936 -- # '[' -z 89027 ']' 00:16:29.471 03:01:08 -- common/autotest_common.sh@940 -- # kill -0 89027 00:16:29.471 03:01:08 -- common/autotest_common.sh@941 -- # uname 00:16:29.471 03:01:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:29.471 03:01:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89027 00:16:29.471 killing process with pid 89027 00:16:29.471 03:01:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:29.471 03:01:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:29.471 03:01:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89027' 00:16:29.471 03:01:08 -- common/autotest_common.sh@955 -- # kill 89027 00:16:29.471 03:01:08 -- common/autotest_common.sh@960 -- # wait 89027 00:16:29.729 03:01:08 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:29.729 03:01:08 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:29.729 03:01:08 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:29.729 03:01:08 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.729 03:01:08 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:29.729 03:01:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.729 03:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.729 03:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.729 03:01:08 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:29.729 00:16:29.729 real 0m9.974s 00:16:29.729 user 0m19.404s 00:16:29.729 sys 0m1.803s 00:16:29.729 03:01:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:29.729 03:01:08 -- common/autotest_common.sh@10 -- # set +x 00:16:29.729 ************************************ 00:16:29.729 END TEST nvmf_discovery 00:16:29.729 ************************************ 00:16:29.729 03:01:08 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:29.729 03:01:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:29.729 03:01:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:29.729 03:01:08 -- common/autotest_common.sh@10 -- # set 
+x 00:16:29.729 ************************************ 00:16:29.729 START TEST nvmf_discovery_remove_ifc 00:16:29.729 ************************************ 00:16:29.729 03:01:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:29.987 * Looking for test storage... 00:16:29.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:29.987 03:01:08 -- nvmf/common.sh@7 -- # uname -s 00:16:29.987 03:01:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.987 03:01:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.987 03:01:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.987 03:01:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.987 03:01:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.987 03:01:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.987 03:01:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.987 03:01:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.987 03:01:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.987 03:01:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.987 03:01:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:29.987 03:01:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:29.987 03:01:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.987 03:01:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.987 03:01:08 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.987 03:01:08 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.987 03:01:08 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.987 03:01:08 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.987 03:01:08 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.987 03:01:08 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.987 03:01:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.987 03:01:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.987 03:01:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.987 03:01:08 -- paths/export.sh@5 -- # export PATH 00:16:29.987 03:01:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.987 03:01:08 -- nvmf/common.sh@47 -- # : 0 00:16:29.987 03:01:08 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:29.987 03:01:08 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:29.987 03:01:08 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.987 03:01:08 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.987 03:01:08 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.987 03:01:08 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:29.987 03:01:08 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:29.987 03:01:08 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:29.987 03:01:08 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:29.987 03:01:08 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:29.987 03:01:08 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.987 03:01:08 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:29.987 03:01:08 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:29.987 03:01:08 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:29.987 03:01:08 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.987 03:01:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.987 03:01:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.987 03:01:08 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:29.987 03:01:08 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:29.987 03:01:08 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:29.987 03:01:08 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:29.987 03:01:08 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:29.987 03:01:08 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:29.987 03:01:08 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.987 03:01:08 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.987 03:01:08 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:29.987 03:01:08 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:29.987 03:01:08 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.987 03:01:08 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.987 03:01:08 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.987 03:01:08 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.987 03:01:08 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.987 03:01:08 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.987 03:01:08 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.987 03:01:08 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.987 03:01:08 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:29.987 03:01:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:29.987 Cannot find device "nvmf_tgt_br" 00:16:29.987 03:01:09 -- nvmf/common.sh@155 -- # true 00:16:29.987 03:01:09 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.987 Cannot find device "nvmf_tgt_br2" 00:16:29.987 03:01:09 -- nvmf/common.sh@156 -- # true 00:16:29.987 03:01:09 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:29.987 03:01:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:29.987 Cannot find device "nvmf_tgt_br" 00:16:29.987 03:01:09 -- nvmf/common.sh@158 -- # true 00:16:29.987 03:01:09 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:29.987 Cannot find device "nvmf_tgt_br2" 00:16:29.987 03:01:09 -- nvmf/common.sh@159 -- # true 00:16:29.987 03:01:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:29.987 03:01:09 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:29.988 03:01:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.988 03:01:09 -- nvmf/common.sh@162 -- # true 00:16:29.988 03:01:09 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.988 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:29.988 03:01:09 -- nvmf/common.sh@163 -- # true 00:16:29.988 03:01:09 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:29.988 03:01:09 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:29.988 03:01:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:29.988 03:01:09 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.247 03:01:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.247 03:01:09 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.247 03:01:09 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.247 03:01:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.247 03:01:09 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.247 03:01:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:30.247 03:01:09 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:30.247 03:01:09 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:30.247 03:01:09 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:30.247 03:01:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.247 03:01:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.247 03:01:09 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.247 03:01:09 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:30.247 03:01:09 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:30.247 03:01:09 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.247 03:01:09 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.247 03:01:09 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.247 03:01:09 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.247 03:01:09 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.247 03:01:09 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:30.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:30.247 00:16:30.247 --- 10.0.0.2 ping statistics --- 00:16:30.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.247 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:30.247 03:01:09 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:30.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:16:30.247 00:16:30.247 --- 10.0.0.3 ping statistics --- 00:16:30.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.247 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:30.247 03:01:09 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:30.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:30.247 00:16:30.247 --- 10.0.0.1 ping statistics --- 00:16:30.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.247 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:30.247 03:01:09 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.247 03:01:09 -- nvmf/common.sh@422 -- # return 0 00:16:30.247 03:01:09 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:30.247 03:01:09 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.247 03:01:09 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:30.247 03:01:09 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:30.247 03:01:09 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.247 03:01:09 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:30.247 03:01:09 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:30.247 03:01:09 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:30.247 03:01:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:30.247 03:01:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:30.247 03:01:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.247 03:01:09 -- nvmf/common.sh@470 -- # nvmfpid=89514 00:16:30.247 03:01:09 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:30.247 03:01:09 -- nvmf/common.sh@471 -- # waitforlisten 89514 00:16:30.247 03:01:09 -- common/autotest_common.sh@817 -- # '[' -z 89514 ']' 00:16:30.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.247 03:01:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.247 03:01:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.247 03:01:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.247 03:01:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.247 03:01:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.506 [2024-04-23 03:01:09.409274] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:16:30.506 [2024-04-23 03:01:09.409598] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.506 [2024-04-23 03:01:09.534310] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:16:30.506 [2024-04-23 03:01:09.552790] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.506 [2024-04-23 03:01:09.603527] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.506 [2024-04-23 03:01:09.603616] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.506 [2024-04-23 03:01:09.603641] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.506 [2024-04-23 03:01:09.603658] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.506 [2024-04-23 03:01:09.603674] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
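Note on the block above: nvmf_veth_init builds the test topology that the three pings just verified. A network namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3), the initiator keeps nvmf_init_if at 10.0.0.1 in the root namespace, and the peer ends (nvmf_init_br, nvmf_tgt_br, nvmf_tgt_br2) are enslaved to the bridge nvmf_br, with iptables ACCEPT rules opening TCP 4420 and bridge forwarding; the target process is then launched inside the namespace via ip netns exec. A minimal single-pair sketch of the same layout, using only interface and address names taken from the trace (run as root, iptables policy permitting):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.2    # root namespace -> target address, as in the log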
00:16:30.506 [2024-04-23 03:01:09.603724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.764 03:01:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:30.764 03:01:09 -- common/autotest_common.sh@850 -- # return 0 00:16:30.764 03:01:09 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:30.764 03:01:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:30.764 03:01:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.764 03:01:09 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:30.764 03:01:09 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:30.764 03:01:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:30.764 03:01:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.764 [2024-04-23 03:01:09.743705] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:30.764 [2024-04-23 03:01:09.751846] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:30.764 null0 00:16:30.764 [2024-04-23 03:01:09.783778] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:30.765 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:30.765 03:01:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:30.765 03:01:09 -- host/discovery_remove_ifc.sh@59 -- # hostpid=89533 00:16:30.765 03:01:09 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 89533 /tmp/host.sock 00:16:30.765 03:01:09 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:30.765 03:01:09 -- common/autotest_common.sh@817 -- # '[' -z 89533 ']' 00:16:30.765 03:01:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:16:30.765 03:01:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:30.765 03:01:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:30.765 03:01:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:30.765 03:01:09 -- common/autotest_common.sh@10 -- # set +x 00:16:30.765 [2024-04-23 03:01:09.860676] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:16:30.765 [2024-04-23 03:01:09.861139] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89533 ] 00:16:31.023 [2024-04-23 03:01:09.982833] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
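Two SPDK applications are now running: the target (pid 89514, core mask 0x2) inside nvmf_tgt_ns_spdk, answering RPC on the default /var/tmp/spdk.sock, and a host-side instance (pid 89533, core mask 0x1) started with -r /tmp/host.sock --wait-for-rpc -L bdev_nvme, which plays the NVMe-oF initiator. Every rpc_cmd carrying -s /tmp/host.sock in this trace therefore drives the initiator, not the target. Assuming rpc_cmd wraps SPDK's scripts/rpc.py (as in the suite's autotest_common.sh), the equivalent direct invocations would be:

    scripts/rpc.py bdev_get_bdevs                            # target side, default /var/tmp/spdk.sock
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init    # releases --wait-for-rpc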
00:16:31.023 [2024-04-23 03:01:10.003926] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.023 [2024-04-23 03:01:10.045598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.023 03:01:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:31.023 03:01:10 -- common/autotest_common.sh@850 -- # return 0 00:16:31.023 03:01:10 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:31.023 03:01:10 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:31.023 03:01:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.024 03:01:10 -- common/autotest_common.sh@10 -- # set +x 00:16:31.024 03:01:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.024 03:01:10 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:31.024 03:01:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.024 03:01:10 -- common/autotest_common.sh@10 -- # set +x 00:16:31.024 03:01:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.024 03:01:10 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:31.024 03:01:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.024 03:01:10 -- common/autotest_common.sh@10 -- # set +x 00:16:32.417 [2024-04-23 03:01:11.197073] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:32.417 [2024-04-23 03:01:11.197131] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:32.417 [2024-04-23 03:01:11.197149] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:32.417 [2024-04-23 03:01:11.203190] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:32.417 [2024-04-23 03:01:11.259675] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:32.417 [2024-04-23 03:01:11.259768] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:32.417 [2024-04-23 03:01:11.259811] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:32.417 [2024-04-23 03:01:11.259858] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:32.417 [2024-04-23 03:01:11.259898] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:32.417 03:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.417 [2024-04-23 03:01:11.265524] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xfb8b90 was disconnected and freed. delete nvme_qpair. 
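The bdev_nvme_start_discovery call above is the core of this test: the initiator attaches a discovery controller to the discovery service on 10.0.0.2:8009, reads the discovery log page, auto-connects to the reported subsystem nqn.2016-06.io.spdk:cnode0 on port 4420, and exposes its namespace as bdev nvme0n1. --wait-for-attach (-w) makes the RPC return only once that initial attach completes, and the --ctrlr-loss-timeout-sec/--reconnect-delay-sec/--fast-io-fail-timeout-sec knobs govern the failure handling exercised below. (The preceding nvmf_discovery test covered the error paths: re-issuing the RPC for a service that is already being monitored fails with -17 "File exists", and pointing it at an unreachable port with -T 3000 fails with -110 "Connection timed out".) Standalone, again assuming rpc_cmd wraps scripts/rpc.py:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'   # -> nvme0n1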
00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.417 03:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.417 03:01:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.417 03:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:32.417 03:01:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:32.417 03:01:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:32.417 03:01:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:32.417 03:01:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.352 03:01:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.352 03:01:12 -- common/autotest_common.sh@10 -- # set +x 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.352 03:01:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.352 03:01:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:34.727 03:01:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:34.727 03:01:13 -- common/autotest_common.sh@10 -- # set +x 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:34.727 03:01:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:34.727 03:01:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@29 -- 
# sort 00:16:35.670 03:01:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:35.670 03:01:14 -- common/autotest_common.sh@10 -- # set +x 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.670 03:01:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.670 03:01:14 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.631 03:01:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:36.631 03:01:15 -- common/autotest_common.sh@10 -- # set +x 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.631 03:01:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.631 03:01:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.564 03:01:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.564 [2024-04-23 03:01:16.687609] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:37.565 [2024-04-23 03:01:16.687677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.565 [2024-04-23 03:01:16.687694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.565 [2024-04-23 03:01:16.687707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.565 [2024-04-23 03:01:16.687717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.565 [2024-04-23 03:01:16.687728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.565 [2024-04-23 03:01:16.687737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.565 [2024-04-23 03:01:16.687747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.565 [2024-04-23 03:01:16.687756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.565 [2024-04-23 03:01:16.687767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:37.565 [2024-04-23 03:01:16.687776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:37.565 [2024-04-23 03:01:16.687786] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7ce90 is same with the state(5) to be set 00:16:37.565 03:01:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:37.565 03:01:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 
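Context for the error burst above: the test deleted 10.0.0.2 from nvmf_tgt_if and downed the link inside the namespace (the ip addr del / ip link set ... down pair at @75/@76), so the initiator's admin queue pair times out (spdk_sock_recv errno 110) and its outstanding admin commands (ASYNC EVENT REQUEST, KEEP ALIVE) complete as ABORTED - SQ DELETION while the reset path takes over. The repeating get_bdev_list / sleep 1 blocks are the suite's wait_for_bdev polling; a minimal sketch of that loop, with the jq|sort|xargs normalization taken from the trace and the expected value illustrative:

    expected="nvme0n1"   # or "" once the bdev is supposed to disappear
    until [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                | jq -r '.[].name' | sort | xargs)" == "$expected" ]]; do
        sleep 1
    done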
00:16:37.565 03:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:37.565 03:01:16 -- common/autotest_common.sh@10 -- # set +x 00:16:37.565 03:01:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.565 03:01:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.565 [2024-04-23 03:01:16.697601] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7ce90 (9): Bad file descriptor 00:16:37.565 [2024-04-23 03:01:16.707623] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:37.565 03:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:37.823 03:01:16 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.823 03:01:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.759 [2024-04-23 03:01:17.765267] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:38.759 03:01:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.759 03:01:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.759 03:01:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.759 03:01:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.759 03:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:38.759 03:01:17 -- common/autotest_common.sh@10 -- # set +x 00:16:38.759 03:01:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.707 [2024-04-23 03:01:18.788265] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:41.087 [2024-04-23 03:01:19.812275] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:41.087 [2024-04-23 03:01:19.812391] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf7ce90 with addr=10.0.0.2, port=4420 00:16:41.087 [2024-04-23 03:01:19.812425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf7ce90 is same with the state(5) to be set 00:16:41.087 [2024-04-23 03:01:19.813324] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7ce90 (9): Bad file descriptor 00:16:41.087 [2024-04-23 03:01:19.813390] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:41.087 [2024-04-23 03:01:19.813441] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:41.087 [2024-04-23 03:01:19.813511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.087 [2024-04-23 03:01:19.813540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.087 [2024-04-23 03:01:19.813569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.087 [2024-04-23 03:01:19.813589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.087 [2024-04-23 03:01:19.813611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.088 [2024-04-23 03:01:19.813631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.088 [2024-04-23 03:01:19.813652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.088 [2024-04-23 03:01:19.813671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.088 [2024-04-23 03:01:19.813693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.088 [2024-04-23 03:01:19.813713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.088 [2024-04-23 03:01:19.813733] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:41.088 [2024-04-23 03:01:19.813792] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf7d2a0 (9): Bad file descriptor 00:16:41.088 [2024-04-23 03:01:19.814793] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:41.088 [2024-04-23 03:01:19.814826] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:41.088 03:01:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.088 03:01:19 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:41.088 03:01:19 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.021 03:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.021 03:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.021 03:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.021 03:01:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.021 03:01:20 -- common/autotest_common.sh@10 -- # set +x 00:16:42.021 03:01:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:42.021 03:01:20 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:42.957 [2024-04-23 03:01:21.823903] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:42.957 [2024-04-23 03:01:21.823940] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:42.957 [2024-04-23 03:01:21.823956] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:42.957 [2024-04-23 03:01:21.829935] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:42.957 [2024-04-23 03:01:21.884862] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:42.957 [2024-04-23 03:01:21.884912] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:42.957 [2024-04-23 03:01:21.884934] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:42.957 [2024-04-23 03:01:21.884950] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:42.957 [2024-04-23 03:01:21.884960] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:42.957 [2024-04-23 03:01:21.892381] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xf6e960 was disconnected and freed. delete nvme_qpair. 00:16:42.957 03:01:21 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:42.957 03:01:21 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:42.957 03:01:21 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:42.957 03:01:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:42.957 03:01:21 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:42.957 03:01:21 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:42.957 03:01:21 -- common/autotest_common.sh@10 -- # set +x 00:16:42.957 03:01:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:42.957 03:01:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:42.957 03:01:22 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:42.957 03:01:22 -- host/discovery_remove_ifc.sh@90 -- # killprocess 89533 00:16:42.957 03:01:22 -- common/autotest_common.sh@936 -- # '[' -z 89533 ']' 00:16:42.957 03:01:22 -- common/autotest_common.sh@940 -- # kill -0 89533 00:16:42.957 03:01:22 -- common/autotest_common.sh@941 -- # uname 00:16:42.957 03:01:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:42.957 03:01:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89533 00:16:42.957 killing process with pid 89533 00:16:42.957 03:01:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:42.957 03:01:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:42.957 03:01:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89533' 00:16:42.957 03:01:22 -- common/autotest_common.sh@955 -- # kill 89533 00:16:42.957 03:01:22 -- common/autotest_common.sh@960 -- # wait 89533 00:16:43.222 03:01:22 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:43.222 03:01:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:43.222 03:01:22 -- nvmf/common.sh@117 -- # sync 00:16:43.222 03:01:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:43.222 03:01:22 -- nvmf/common.sh@120 -- # set +e 00:16:43.222 03:01:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:43.222 03:01:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:43.222 rmmod nvme_tcp 00:16:43.222 rmmod nvme_fabrics 00:16:43.222 rmmod nvme_keyring 00:16:43.222 03:01:22 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:43.222 03:01:22 -- nvmf/common.sh@124 -- # set -e 00:16:43.222 03:01:22 -- nvmf/common.sh@125 -- # return 0 00:16:43.222 03:01:22 -- nvmf/common.sh@478 -- # '[' -n 89514 ']' 00:16:43.222 03:01:22 -- nvmf/common.sh@479 -- # killprocess 89514 00:16:43.222 03:01:22 -- common/autotest_common.sh@936 -- # '[' -z 89514 ']' 00:16:43.222 03:01:22 -- common/autotest_common.sh@940 -- # kill -0 89514 00:16:43.223 03:01:22 -- common/autotest_common.sh@941 -- # uname 00:16:43.223 03:01:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:43.223 03:01:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89514 00:16:43.223 03:01:22 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:43.223 killing process with pid 89514 00:16:43.223 03:01:22 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
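With 10.0.0.2 restored and nvmf_tgt_if brought back up, the still-running discovery service is found again and a fresh controller attaches as nvme1, so wait_for_bdev now succeeds on nvme1n1. The heavily escaped comparisons xtrace prints here, e.g. [[ nvme1n1 != \n\v\m\e\1\n\1 ]], are ordinary bash [[ ]] string tests whose right-hand side is rendered with every character backslash-escaped, which forces a literal rather than glob-pattern match; an equivalent sketch:

    bdevs="$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)"
    [[ "$bdevs" != \n\v\m\e\1\n\1 ]]   # same test as: [[ "$bdevs" != "nvme1n1" ]]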
00:16:43.223 03:01:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89514' 00:16:43.223 03:01:22 -- common/autotest_common.sh@955 -- # kill 89514 00:16:43.223 03:01:22 -- common/autotest_common.sh@960 -- # wait 89514 00:16:43.481 03:01:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:43.481 03:01:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:43.481 03:01:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:43.481 03:01:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:43.481 03:01:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:43.481 03:01:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.481 03:01:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.481 03:01:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.482 03:01:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:43.482 00:16:43.482 real 0m13.675s 00:16:43.482 user 0m21.754s 00:16:43.482 sys 0m2.458s 00:16:43.482 03:01:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:43.482 03:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:43.482 ************************************ 00:16:43.482 END TEST nvmf_discovery_remove_ifc 00:16:43.482 ************************************ 00:16:43.482 03:01:22 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:43.482 03:01:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:43.482 03:01:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:43.482 03:01:22 -- common/autotest_common.sh@10 -- # set +x 00:16:43.774 ************************************ 00:16:43.774 START TEST nvmf_identify_kernel_target 00:16:43.774 ************************************ 00:16:43.774 03:01:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:43.774 * Looking for test storage... 
00:16:43.774 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:43.774 03:01:22 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:43.774 03:01:22 -- nvmf/common.sh@7 -- # uname -s 00:16:43.774 03:01:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:43.774 03:01:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:43.774 03:01:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:43.774 03:01:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:43.774 03:01:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:43.774 03:01:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:43.774 03:01:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:43.774 03:01:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:43.774 03:01:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:43.774 03:01:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:43.774 03:01:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:43.774 03:01:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:43.774 03:01:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:43.774 03:01:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:43.774 03:01:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:43.774 03:01:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:43.774 03:01:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:43.774 03:01:22 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:43.774 03:01:22 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:43.774 03:01:22 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:43.774 03:01:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.775 03:01:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.775 03:01:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.775 03:01:22 -- paths/export.sh@5 -- # export PATH 00:16:43.775 03:01:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:43.775 03:01:22 -- nvmf/common.sh@47 -- # : 0 00:16:43.775 03:01:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:43.775 03:01:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:43.775 03:01:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:43.775 03:01:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:43.775 03:01:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:43.775 03:01:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:43.775 03:01:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:43.775 03:01:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:43.775 03:01:22 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:43.775 03:01:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:43.775 03:01:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:43.775 03:01:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:43.775 03:01:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:43.775 03:01:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:43.775 03:01:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:43.775 03:01:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:43.775 03:01:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:43.775 03:01:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:43.775 03:01:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:43.775 03:01:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:43.775 03:01:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:43.775 03:01:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:43.775 03:01:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:43.775 03:01:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.775 03:01:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.775 03:01:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:43.775 03:01:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:43.775 03:01:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:43.775 03:01:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:43.775 03:01:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:43.775 03:01:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:16:43.775 03:01:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:43.775 03:01:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:43.775 03:01:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:43.775 03:01:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:43.775 03:01:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:43.775 03:01:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:43.775 Cannot find device "nvmf_tgt_br" 00:16:43.775 03:01:22 -- nvmf/common.sh@155 -- # true 00:16:43.775 03:01:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:43.775 Cannot find device "nvmf_tgt_br2" 00:16:43.775 03:01:22 -- nvmf/common.sh@156 -- # true 00:16:43.775 03:01:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:43.775 03:01:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:43.775 Cannot find device "nvmf_tgt_br" 00:16:43.775 03:01:22 -- nvmf/common.sh@158 -- # true 00:16:43.775 03:01:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:43.775 Cannot find device "nvmf_tgt_br2" 00:16:43.775 03:01:22 -- nvmf/common.sh@159 -- # true 00:16:43.775 03:01:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:43.775 03:01:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:43.775 03:01:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:43.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.775 03:01:22 -- nvmf/common.sh@162 -- # true 00:16:43.775 03:01:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:43.775 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:43.775 03:01:22 -- nvmf/common.sh@163 -- # true 00:16:43.775 03:01:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:43.775 03:01:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:44.033 03:01:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:44.033 03:01:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:44.033 03:01:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:44.033 03:01:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:44.033 03:01:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:44.033 03:01:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:44.033 03:01:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:44.033 03:01:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:44.033 03:01:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:44.033 03:01:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:44.033 03:01:23 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:44.033 03:01:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:44.033 03:01:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:44.033 03:01:23 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:44.033 03:01:23 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:44.033 03:01:23 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:44.033 03:01:23 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:44.033 03:01:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:44.033 03:01:23 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:44.033 03:01:23 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:44.033 03:01:23 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:44.033 03:01:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:44.033 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.033 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:16:44.033 00:16:44.033 --- 10.0.0.2 ping statistics --- 00:16:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.033 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:16:44.033 03:01:23 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:44.033 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:44.033 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:16:44.033 00:16:44.033 --- 10.0.0.3 ping statistics --- 00:16:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.033 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:44.033 03:01:23 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:44.033 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:44.033 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:16:44.033 00:16:44.033 --- 10.0.0.1 ping statistics --- 00:16:44.033 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.033 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:16:44.033 03:01:23 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.033 03:01:23 -- nvmf/common.sh@422 -- # return 0 00:16:44.033 03:01:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:44.033 03:01:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.033 03:01:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:44.033 03:01:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:44.033 03:01:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.033 03:01:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:44.033 03:01:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:44.033 03:01:23 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:44.033 03:01:23 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:44.033 03:01:23 -- nvmf/common.sh@717 -- # local ip 00:16:44.033 03:01:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:44.033 03:01:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:44.034 03:01:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.034 03:01:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.034 03:01:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:44.034 03:01:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.034 03:01:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:44.034 03:01:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:44.034 03:01:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:44.034 03:01:23 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:44.034 03:01:23 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:44.034 03:01:23 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:44.034 03:01:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:16:44.034 03:01:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:44.034 03:01:23 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:44.034 03:01:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:44.034 03:01:23 -- nvmf/common.sh@628 -- # local block nvme 00:16:44.034 03:01:23 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:16:44.034 03:01:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:16:44.034 03:01:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:44.034 03:01:23 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:44.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:44.598 Waiting for block devices as requested 00:16:44.598 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:44.598 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:44.598 03:01:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.598 03:01:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:44.598 03:01:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:16:44.598 03:01:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:44.598 03:01:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:44.598 03:01:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.599 03:01:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:16:44.599 03:01:23 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:44.599 03:01:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:44.856 No valid GPT data, bailing 00:16:44.856 03:01:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:44.856 03:01:23 -- scripts/common.sh@391 -- # pt= 00:16:44.856 03:01:23 -- scripts/common.sh@392 -- # return 1 00:16:44.856 03:01:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:16:44.856 03:01:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.856 03:01:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:44.856 03:01:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:16:44.856 03:01:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:44.856 03:01:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:44.856 03:01:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.856 03:01:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:16:44.856 03:01:23 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:44.856 03:01:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:44.856 No valid GPT data, bailing 00:16:44.856 03:01:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:44.856 03:01:23 -- scripts/common.sh@391 -- # pt= 00:16:44.856 03:01:23 -- scripts/common.sh@392 -- # return 1 00:16:44.856 03:01:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:16:44.856 03:01:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.856 03:01:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:44.856 03:01:23 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:16:44.856 03:01:23 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:44.856 03:01:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:44.856 03:01:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.856 03:01:23 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:16:44.856 03:01:23 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:44.856 03:01:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:44.856 No valid GPT data, bailing 00:16:44.856 03:01:23 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:44.856 03:01:23 -- scripts/common.sh@391 -- # pt= 00:16:44.856 03:01:23 -- scripts/common.sh@392 -- # return 1 00:16:44.856 03:01:23 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:16:44.856 03:01:23 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:44.856 03:01:23 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:44.856 03:01:23 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:16:44.856 03:01:23 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:44.856 03:01:23 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:44.856 03:01:23 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:44.856 03:01:23 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:16:44.856 03:01:23 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:44.856 03:01:23 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:45.114 No valid GPT data, bailing 00:16:45.115 03:01:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:45.115 03:01:24 -- scripts/common.sh@391 -- # pt= 00:16:45.115 03:01:24 -- scripts/common.sh@392 -- # return 1 00:16:45.115 03:01:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:16:45.115 03:01:24 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:16:45.115 03:01:24 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:45.115 03:01:24 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:45.115 03:01:24 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:45.115 03:01:24 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:45.115 03:01:24 -- nvmf/common.sh@656 -- # echo 1 00:16:45.115 03:01:24 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:16:45.115 03:01:24 -- nvmf/common.sh@658 -- # echo 1 00:16:45.115 03:01:24 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:16:45.115 03:01:24 -- nvmf/common.sh@661 -- # echo tcp 00:16:45.115 03:01:24 -- nvmf/common.sh@662 -- # echo 4420 00:16:45.115 03:01:24 -- nvmf/common.sh@663 -- # echo ipv4 00:16:45.115 03:01:24 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:45.115 03:01:24 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -a 10.0.0.1 -t tcp -s 4420 00:16:45.115 00:16:45.115 Discovery Log Number of Records 2, Generation counter 2 00:16:45.115 =====Discovery Log Entry 0====== 00:16:45.115 trtype: tcp 00:16:45.115 adrfam: ipv4 00:16:45.115 subtype: current discovery subsystem 00:16:45.115 treq: not specified, sq flow control disable supported 00:16:45.115 portid: 1 00:16:45.115 trsvcid: 4420 00:16:45.115 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:45.115 traddr: 10.0.0.1 00:16:45.115 eflags: none 00:16:45.115 sectype: none 00:16:45.115 =====Discovery Log Entry 1====== 00:16:45.115 trtype: tcp 00:16:45.115 adrfam: ipv4 00:16:45.115 subtype: nvme subsystem 00:16:45.115 treq: not specified, sq flow control disable supported 00:16:45.115 portid: 1 00:16:45.115 trsvcid: 4420 00:16:45.115 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:45.115 traddr: 10.0.0.1 00:16:45.115 eflags: none 00:16:45.115 sectype: none 00:16:45.115 03:01:24 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:45.115 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:45.374 ===================================================== 00:16:45.374 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:45.374 ===================================================== 00:16:45.374 Controller Capabilities/Features 00:16:45.374 ================================ 00:16:45.374 Vendor ID: 0000 00:16:45.374 Subsystem Vendor ID: 0000 00:16:45.374 Serial Number: ac331e32b2c163e53256 00:16:45.374 Model Number: Linux 00:16:45.374 Firmware Version: 6.7.0-68 00:16:45.374 Recommended Arb Burst: 0 00:16:45.374 IEEE OUI Identifier: 00 00 00 00:16:45.374 Multi-path I/O 00:16:45.374 May have multiple subsystem ports: No 00:16:45.374 May have multiple controllers: No 00:16:45.374 Associated with SR-IOV VF: No 00:16:45.374 Max Data Transfer Size: Unlimited 00:16:45.374 Max Number of Namespaces: 0 00:16:45.374 Max Number of I/O Queues: 1024 00:16:45.374 NVMe Specification Version (VS): 1.3 00:16:45.374 NVMe Specification Version (Identify): 1.3 00:16:45.374 Maximum Queue Entries: 1024 00:16:45.374 Contiguous Queues Required: No 00:16:45.374 Arbitration Mechanisms Supported 00:16:45.374 Weighted Round Robin: Not Supported 00:16:45.374 Vendor Specific: Not Supported 00:16:45.374 Reset Timeout: 7500 ms 00:16:45.374 Doorbell Stride: 4 bytes 00:16:45.374 NVM Subsystem Reset: Not Supported 00:16:45.374 Command Sets Supported 00:16:45.374 NVM Command Set: Supported 00:16:45.374 Boot Partition: Not Supported 00:16:45.374 Memory Page Size Minimum: 4096 bytes 00:16:45.374 Memory Page Size Maximum: 4096 bytes 00:16:45.374 Persistent Memory Region: Not Supported 00:16:45.374 Optional Asynchronous Events Supported 00:16:45.374 Namespace Attribute Notices: Not Supported 00:16:45.374 Firmware Activation Notices: Not Supported 00:16:45.374 ANA Change Notices: Not Supported 00:16:45.374 PLE Aggregate Log Change Notices: Not Supported 00:16:45.374 LBA Status Info Alert Notices: Not Supported 00:16:45.374 EGE Aggregate Log Change Notices: Not Supported 00:16:45.374 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.374 Zone Descriptor Change Notices: Not Supported 00:16:45.374 Discovery Log Change Notices: Supported 00:16:45.374 Controller Attributes 00:16:45.374 128-bit Host Identifier: Not Supported 00:16:45.374 Non-Operational Permissive Mode: Not Supported 00:16:45.374 NVM Sets: Not Supported 00:16:45.374 Read Recovery Levels: Not Supported 00:16:45.374 Endurance Groups: Not Supported 00:16:45.374 Predictable Latency Mode: Not Supported 00:16:45.374 Traffic Based Keep ALive: Not Supported 00:16:45.374 Namespace Granularity: Not Supported 00:16:45.374 SQ Associations: Not Supported 00:16:45.374 UUID List: Not Supported 00:16:45.374 Multi-Domain Subsystem: Not Supported 00:16:45.374 Fixed Capacity Management: Not Supported 
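# The discovery log just shown carries two records: the well-known discovery
# subsystem and the kernel target's nqn.2016-06.io.spdk:testnqn, both exported
# on 10.0.0.1:4420. A hand-run equivalent of that check (hostnqn/hostid are
# the values generated for this run) might be:
#   nvme discover -t tcp -a 10.0.0.1 -s 4420 \
#       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 \
#       --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 | grep -c subnqn:   # expect 2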
00:16:45.374 Variable Capacity Management: Not Supported 00:16:45.374 Delete Endurance Group: Not Supported 00:16:45.374 Delete NVM Set: Not Supported 00:16:45.374 Extended LBA Formats Supported: Not Supported 00:16:45.374 Flexible Data Placement Supported: Not Supported 00:16:45.374 00:16:45.374 Controller Memory Buffer Support 00:16:45.374 ================================ 00:16:45.374 Supported: No 00:16:45.374 00:16:45.374 Persistent Memory Region Support 00:16:45.374 ================================ 00:16:45.374 Supported: No 00:16:45.374 00:16:45.374 Admin Command Set Attributes 00:16:45.374 ============================ 00:16:45.374 Security Send/Receive: Not Supported 00:16:45.374 Format NVM: Not Supported 00:16:45.374 Firmware Activate/Download: Not Supported 00:16:45.374 Namespace Management: Not Supported 00:16:45.374 Device Self-Test: Not Supported 00:16:45.374 Directives: Not Supported 00:16:45.374 NVMe-MI: Not Supported 00:16:45.374 Virtualization Management: Not Supported 00:16:45.374 Doorbell Buffer Config: Not Supported 00:16:45.374 Get LBA Status Capability: Not Supported 00:16:45.375 Command & Feature Lockdown Capability: Not Supported 00:16:45.375 Abort Command Limit: 1 00:16:45.375 Async Event Request Limit: 1 00:16:45.375 Number of Firmware Slots: N/A 00:16:45.375 Firmware Slot 1 Read-Only: N/A 00:16:45.375 Firmware Activation Without Reset: N/A 00:16:45.375 Multiple Update Detection Support: N/A 00:16:45.375 Firmware Update Granularity: No Information Provided 00:16:45.375 Per-Namespace SMART Log: No 00:16:45.375 Asymmetric Namespace Access Log Page: Not Supported 00:16:45.375 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:45.375 Command Effects Log Page: Not Supported 00:16:45.375 Get Log Page Extended Data: Supported 00:16:45.375 Telemetry Log Pages: Not Supported 00:16:45.375 Persistent Event Log Pages: Not Supported 00:16:45.375 Supported Log Pages Log Page: May Support 00:16:45.375 Commands Supported & Effects Log Page: Not Supported 00:16:45.375 Feature Identifiers & Effects Log Page:May Support 00:16:45.375 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.375 Data Area 4 for Telemetry Log: Not Supported 00:16:45.375 Error Log Page Entries Supported: 1 00:16:45.375 Keep Alive: Not Supported 00:16:45.375 00:16:45.375 NVM Command Set Attributes 00:16:45.375 ========================== 00:16:45.375 Submission Queue Entry Size 00:16:45.375 Max: 1 00:16:45.375 Min: 1 00:16:45.375 Completion Queue Entry Size 00:16:45.375 Max: 1 00:16:45.375 Min: 1 00:16:45.375 Number of Namespaces: 0 00:16:45.375 Compare Command: Not Supported 00:16:45.375 Write Uncorrectable Command: Not Supported 00:16:45.375 Dataset Management Command: Not Supported 00:16:45.375 Write Zeroes Command: Not Supported 00:16:45.375 Set Features Save Field: Not Supported 00:16:45.375 Reservations: Not Supported 00:16:45.375 Timestamp: Not Supported 00:16:45.375 Copy: Not Supported 00:16:45.375 Volatile Write Cache: Not Present 00:16:45.375 Atomic Write Unit (Normal): 1 00:16:45.375 Atomic Write Unit (PFail): 1 00:16:45.375 Atomic Compare & Write Unit: 1 00:16:45.375 Fused Compare & Write: Not Supported 00:16:45.375 Scatter-Gather List 00:16:45.375 SGL Command Set: Supported 00:16:45.375 SGL Keyed: Not Supported 00:16:45.375 SGL Bit Bucket Descriptor: Not Supported 00:16:45.375 SGL Metadata Pointer: Not Supported 00:16:45.375 Oversized SGL: Not Supported 00:16:45.375 SGL Metadata Address: Not Supported 00:16:45.375 SGL Offset: Supported 00:16:45.375 Transport SGL Data Block: Not 
Supported 00:16:45.375 Replay Protected Memory Block: Not Supported 00:16:45.375 00:16:45.375 Firmware Slot Information 00:16:45.375 ========================= 00:16:45.375 Active slot: 0 00:16:45.375 00:16:45.375 00:16:45.375 Error Log 00:16:45.375 ========= 00:16:45.375 00:16:45.375 Active Namespaces 00:16:45.375 ================= 00:16:45.375 Discovery Log Page 00:16:45.375 ================== 00:16:45.375 Generation Counter: 2 00:16:45.375 Number of Records: 2 00:16:45.375 Record Format: 0 00:16:45.375 00:16:45.375 Discovery Log Entry 0 00:16:45.375 ---------------------- 00:16:45.375 Transport Type: 3 (TCP) 00:16:45.375 Address Family: 1 (IPv4) 00:16:45.375 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:45.375 Entry Flags: 00:16:45.375 Duplicate Returned Information: 0 00:16:45.375 Explicit Persistent Connection Support for Discovery: 0 00:16:45.375 Transport Requirements: 00:16:45.375 Secure Channel: Not Specified 00:16:45.375 Port ID: 1 (0x0001) 00:16:45.375 Controller ID: 65535 (0xffff) 00:16:45.375 Admin Max SQ Size: 32 00:16:45.375 Transport Service Identifier: 4420 00:16:45.375 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:45.375 Transport Address: 10.0.0.1 00:16:45.375 Discovery Log Entry 1 00:16:45.375 ---------------------- 00:16:45.375 Transport Type: 3 (TCP) 00:16:45.375 Address Family: 1 (IPv4) 00:16:45.375 Subsystem Type: 2 (NVM Subsystem) 00:16:45.375 Entry Flags: 00:16:45.375 Duplicate Returned Information: 0 00:16:45.375 Explicit Persistent Connection Support for Discovery: 0 00:16:45.375 Transport Requirements: 00:16:45.375 Secure Channel: Not Specified 00:16:45.375 Port ID: 1 (0x0001) 00:16:45.375 Controller ID: 65535 (0xffff) 00:16:45.375 Admin Max SQ Size: 32 00:16:45.375 Transport Service Identifier: 4420 00:16:45.375 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:45.375 Transport Address: 10.0.0.1 00:16:45.375 03:01:24 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:45.375 get_feature(0x01) failed 00:16:45.375 get_feature(0x02) failed 00:16:45.375 get_feature(0x04) failed 00:16:45.375 ===================================================== 00:16:45.375 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:45.375 ===================================================== 00:16:45.375 Controller Capabilities/Features 00:16:45.375 ================================ 00:16:45.375 Vendor ID: 0000 00:16:45.375 Subsystem Vendor ID: 0000 00:16:45.375 Serial Number: b969eea2a02a70d02992 00:16:45.375 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:45.375 Firmware Version: 6.7.0-68 00:16:45.375 Recommended Arb Burst: 6 00:16:45.375 IEEE OUI Identifier: 00 00 00 00:16:45.375 Multi-path I/O 00:16:45.375 May have multiple subsystem ports: Yes 00:16:45.375 May have multiple controllers: Yes 00:16:45.375 Associated with SR-IOV VF: No 00:16:45.375 Max Data Transfer Size: Unlimited 00:16:45.375 Max Number of Namespaces: 1024 00:16:45.375 Max Number of I/O Queues: 128 00:16:45.375 NVMe Specification Version (VS): 1.3 00:16:45.375 NVMe Specification Version (Identify): 1.3 00:16:45.375 Maximum Queue Entries: 1024 00:16:45.375 Contiguous Queues Required: No 00:16:45.375 Arbitration Mechanisms Supported 00:16:45.375 Weighted Round Robin: Not Supported 00:16:45.375 Vendor Specific: Not Supported 00:16:45.375 Reset Timeout: 7500 ms 00:16:45.375 Doorbell Stride: 4 bytes 
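# The get_feature(0x01/0x02/0x04) failures opening this identify pass are
# expected: Arbitration, Power Management and Temperature Threshold are
# optional features the Linux nvmet target does not implement, which is
# likely also what produced the three Status Code 0x2 (Invalid Field)
# entries in the controller's error log further down. A manual probe of the
# same features, assuming a kernel-attached controller at /dev/nvme1:
#   nvme get-feature /dev/nvme1 -f 0x01   # arbitration
#   nvme get-feature /dev/nvme1 -f 0x02   # power management
#   nvme get-feature /dev/nvme1 -f 0x04   # temperature threshold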
00:16:45.375 NVM Subsystem Reset: Not Supported 00:16:45.375 Command Sets Supported 00:16:45.375 NVM Command Set: Supported 00:16:45.375 Boot Partition: Not Supported 00:16:45.375 Memory Page Size Minimum: 4096 bytes 00:16:45.375 Memory Page Size Maximum: 4096 bytes 00:16:45.375 Persistent Memory Region: Not Supported 00:16:45.375 Optional Asynchronous Events Supported 00:16:45.375 Namespace Attribute Notices: Supported 00:16:45.375 Firmware Activation Notices: Not Supported 00:16:45.375 ANA Change Notices: Supported 00:16:45.375 PLE Aggregate Log Change Notices: Not Supported 00:16:45.375 LBA Status Info Alert Notices: Not Supported 00:16:45.375 EGE Aggregate Log Change Notices: Not Supported 00:16:45.375 Normal NVM Subsystem Shutdown event: Not Supported 00:16:45.375 Zone Descriptor Change Notices: Not Supported 00:16:45.375 Discovery Log Change Notices: Not Supported 00:16:45.375 Controller Attributes 00:16:45.375 128-bit Host Identifier: Supported 00:16:45.375 Non-Operational Permissive Mode: Not Supported 00:16:45.375 NVM Sets: Not Supported 00:16:45.375 Read Recovery Levels: Not Supported 00:16:45.375 Endurance Groups: Not Supported 00:16:45.375 Predictable Latency Mode: Not Supported 00:16:45.375 Traffic Based Keep ALive: Supported 00:16:45.375 Namespace Granularity: Not Supported 00:16:45.375 SQ Associations: Not Supported 00:16:45.375 UUID List: Not Supported 00:16:45.375 Multi-Domain Subsystem: Not Supported 00:16:45.375 Fixed Capacity Management: Not Supported 00:16:45.375 Variable Capacity Management: Not Supported 00:16:45.375 Delete Endurance Group: Not Supported 00:16:45.375 Delete NVM Set: Not Supported 00:16:45.375 Extended LBA Formats Supported: Not Supported 00:16:45.375 Flexible Data Placement Supported: Not Supported 00:16:45.375 00:16:45.375 Controller Memory Buffer Support 00:16:45.375 ================================ 00:16:45.375 Supported: No 00:16:45.375 00:16:45.375 Persistent Memory Region Support 00:16:45.375 ================================ 00:16:45.375 Supported: No 00:16:45.375 00:16:45.375 Admin Command Set Attributes 00:16:45.375 ============================ 00:16:45.375 Security Send/Receive: Not Supported 00:16:45.375 Format NVM: Not Supported 00:16:45.375 Firmware Activate/Download: Not Supported 00:16:45.375 Namespace Management: Not Supported 00:16:45.375 Device Self-Test: Not Supported 00:16:45.375 Directives: Not Supported 00:16:45.375 NVMe-MI: Not Supported 00:16:45.375 Virtualization Management: Not Supported 00:16:45.375 Doorbell Buffer Config: Not Supported 00:16:45.375 Get LBA Status Capability: Not Supported 00:16:45.375 Command & Feature Lockdown Capability: Not Supported 00:16:45.375 Abort Command Limit: 4 00:16:45.375 Async Event Request Limit: 4 00:16:45.375 Number of Firmware Slots: N/A 00:16:45.375 Firmware Slot 1 Read-Only: N/A 00:16:45.375 Firmware Activation Without Reset: N/A 00:16:45.375 Multiple Update Detection Support: N/A 00:16:45.375 Firmware Update Granularity: No Information Provided 00:16:45.375 Per-Namespace SMART Log: Yes 00:16:45.375 Asymmetric Namespace Access Log Page: Supported 00:16:45.375 ANA Transition Time : 10 sec 00:16:45.375 00:16:45.375 Asymmetric Namespace Access Capabilities 00:16:45.375 ANA Optimized State : Supported 00:16:45.375 ANA Non-Optimized State : Supported 00:16:45.375 ANA Inaccessible State : Supported 00:16:45.375 ANA Persistent Loss State : Supported 00:16:45.375 ANA Change State : Supported 00:16:45.375 ANAGRPID is not changed : No 00:16:45.375 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
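# ANA reports as supported because nvmet always places namespaces in an ANA
# group; the state advertised below (group 1, state 1 = optimized) is a
# configfs attribute on the target side, per the usual nvmet layout:
#   cat /sys/kernel/config/nvmet/ports/1/ana_groups/1/ana_state   # optimized
#   echo inaccessible > /sys/kernel/config/nvmet/ports/1/ana_groups/1/ana_state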
00:16:45.375 00:16:45.375 ANA Group Identifier Maximum : 128 00:16:45.375 Number of ANA Group Identifiers : 128 00:16:45.375 Max Number of Allowed Namespaces : 1024 00:16:45.375 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:16:45.375 Command Effects Log Page: Supported 00:16:45.375 Get Log Page Extended Data: Supported 00:16:45.375 Telemetry Log Pages: Not Supported 00:16:45.375 Persistent Event Log Pages: Not Supported 00:16:45.375 Supported Log Pages Log Page: May Support 00:16:45.375 Commands Supported & Effects Log Page: Not Supported 00:16:45.375 Feature Identifiers & Effects Log Page:May Support 00:16:45.375 NVMe-MI Commands & Effects Log Page: May Support 00:16:45.375 Data Area 4 for Telemetry Log: Not Supported 00:16:45.375 Error Log Page Entries Supported: 128 00:16:45.375 Keep Alive: Supported 00:16:45.375 Keep Alive Granularity: 1000 ms 00:16:45.375 00:16:45.375 NVM Command Set Attributes 00:16:45.375 ========================== 00:16:45.375 Submission Queue Entry Size 00:16:45.375 Max: 64 00:16:45.375 Min: 64 00:16:45.375 Completion Queue Entry Size 00:16:45.375 Max: 16 00:16:45.375 Min: 16 00:16:45.375 Number of Namespaces: 1024 00:16:45.375 Compare Command: Not Supported 00:16:45.375 Write Uncorrectable Command: Not Supported 00:16:45.375 Dataset Management Command: Supported 00:16:45.375 Write Zeroes Command: Supported 00:16:45.375 Set Features Save Field: Not Supported 00:16:45.375 Reservations: Not Supported 00:16:45.375 Timestamp: Not Supported 00:16:45.375 Copy: Not Supported 00:16:45.375 Volatile Write Cache: Present 00:16:45.375 Atomic Write Unit (Normal): 1 00:16:45.376 Atomic Write Unit (PFail): 1 00:16:45.376 Atomic Compare & Write Unit: 1 00:16:45.376 Fused Compare & Write: Not Supported 00:16:45.376 Scatter-Gather List 00:16:45.376 SGL Command Set: Supported 00:16:45.376 SGL Keyed: Not Supported 00:16:45.376 SGL Bit Bucket Descriptor: Not Supported 00:16:45.376 SGL Metadata Pointer: Not Supported 00:16:45.376 Oversized SGL: Not Supported 00:16:45.376 SGL Metadata Address: Not Supported 00:16:45.376 SGL Offset: Supported 00:16:45.376 Transport SGL Data Block: Not Supported 00:16:45.376 Replay Protected Memory Block: Not Supported 00:16:45.376 00:16:45.376 Firmware Slot Information 00:16:45.376 ========================= 00:16:45.376 Active slot: 0 00:16:45.376 00:16:45.376 Asymmetric Namespace Access 00:16:45.376 =========================== 00:16:45.376 Change Count : 0 00:16:45.376 Number of ANA Group Descriptors : 1 00:16:45.376 ANA Group Descriptor : 0 00:16:45.376 ANA Group ID : 1 00:16:45.376 Number of NSID Values : 1 00:16:45.376 Change Count : 0 00:16:45.376 ANA State : 1 00:16:45.376 Namespace Identifier : 1 00:16:45.376 00:16:45.376 Commands Supported and Effects 00:16:45.376 ============================== 00:16:45.376 Admin Commands 00:16:45.376 -------------- 00:16:45.376 Get Log Page (02h): Supported 00:16:45.376 Identify (06h): Supported 00:16:45.376 Abort (08h): Supported 00:16:45.376 Set Features (09h): Supported 00:16:45.376 Get Features (0Ah): Supported 00:16:45.376 Asynchronous Event Request (0Ch): Supported 00:16:45.376 Keep Alive (18h): Supported 00:16:45.376 I/O Commands 00:16:45.376 ------------ 00:16:45.376 Flush (00h): Supported 00:16:45.376 Write (01h): Supported LBA-Change 00:16:45.376 Read (02h): Supported 00:16:45.376 Write Zeroes (08h): Supported LBA-Change 00:16:45.376 Dataset Management (09h): Supported 00:16:45.376 00:16:45.376 Error Log 00:16:45.376 ========= 00:16:45.376 Entry: 0 00:16:45.376 Error Count: 0x3 00:16:45.376 Submission 
Queue Id: 0x0 00:16:45.376 Command Id: 0x5 00:16:45.376 Phase Bit: 0 00:16:45.376 Status Code: 0x2 00:16:45.376 Status Code Type: 0x0 00:16:45.376 Do Not Retry: 1 00:16:45.376 Error Location: 0x28 00:16:45.376 LBA: 0x0 00:16:45.376 Namespace: 0x0 00:16:45.376 Vendor Log Page: 0x0 00:16:45.376 ----------- 00:16:45.376 Entry: 1 00:16:45.376 Error Count: 0x2 00:16:45.376 Submission Queue Id: 0x0 00:16:45.376 Command Id: 0x5 00:16:45.376 Phase Bit: 0 00:16:45.376 Status Code: 0x2 00:16:45.376 Status Code Type: 0x0 00:16:45.376 Do Not Retry: 1 00:16:45.376 Error Location: 0x28 00:16:45.376 LBA: 0x0 00:16:45.376 Namespace: 0x0 00:16:45.376 Vendor Log Page: 0x0 00:16:45.376 ----------- 00:16:45.376 Entry: 2 00:16:45.376 Error Count: 0x1 00:16:45.376 Submission Queue Id: 0x0 00:16:45.376 Command Id: 0x4 00:16:45.376 Phase Bit: 0 00:16:45.376 Status Code: 0x2 00:16:45.376 Status Code Type: 0x0 00:16:45.376 Do Not Retry: 1 00:16:45.376 Error Location: 0x28 00:16:45.376 LBA: 0x0 00:16:45.376 Namespace: 0x0 00:16:45.376 Vendor Log Page: 0x0 00:16:45.376 00:16:45.376 Number of Queues 00:16:45.376 ================ 00:16:45.376 Number of I/O Submission Queues: 128 00:16:45.376 Number of I/O Completion Queues: 128 00:16:45.376 00:16:45.376 ZNS Specific Controller Data 00:16:45.376 ============================ 00:16:45.376 Zone Append Size Limit: 0 00:16:45.376 00:16:45.376 00:16:45.376 Active Namespaces 00:16:45.376 ================= 00:16:45.376 get_feature(0x05) failed 00:16:45.376 Namespace ID:1 00:16:45.376 Command Set Identifier: NVM (00h) 00:16:45.376 Deallocate: Supported 00:16:45.376 Deallocated/Unwritten Error: Not Supported 00:16:45.376 Deallocated Read Value: Unknown 00:16:45.376 Deallocate in Write Zeroes: Not Supported 00:16:45.376 Deallocated Guard Field: 0xFFFF 00:16:45.376 Flush: Supported 00:16:45.376 Reservation: Not Supported 00:16:45.376 Namespace Sharing Capabilities: Multiple Controllers 00:16:45.376 Size (in LBAs): 1310720 (5GiB) 00:16:45.376 Capacity (in LBAs): 1310720 (5GiB) 00:16:45.376 Utilization (in LBAs): 1310720 (5GiB) 00:16:45.376 UUID: f188bb78-d5f7-4b91-8f76-440663195b54 00:16:45.376 Thin Provisioning: Not Supported 00:16:45.376 Per-NS Atomic Units: Yes 00:16:45.376 Atomic Boundary Size (Normal): 0 00:16:45.376 Atomic Boundary Size (PFail): 0 00:16:45.376 Atomic Boundary Offset: 0 00:16:45.376 NGUID/EUI64 Never Reused: No 00:16:45.376 ANA group ID: 1 00:16:45.376 Namespace Write Protected: No 00:16:45.376 Number of LBA Formats: 1 00:16:45.376 Current LBA Format: LBA Format #00 00:16:45.376 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:45.376 00:16:45.376 03:01:24 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:45.376 03:01:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:45.376 03:01:24 -- nvmf/common.sh@117 -- # sync 00:16:45.634 03:01:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:45.634 03:01:24 -- nvmf/common.sh@120 -- # set +e 00:16:45.635 03:01:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:45.635 03:01:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:45.635 rmmod nvme_tcp 00:16:45.635 rmmod nvme_fabrics 00:16:45.635 03:01:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:45.635 03:01:24 -- nvmf/common.sh@124 -- # set -e 00:16:45.635 03:01:24 -- nvmf/common.sh@125 -- # return 0 00:16:45.635 03:01:24 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:16:45.635 03:01:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:45.635 03:01:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:45.635 03:01:24 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:45.635 03:01:24 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.635 03:01:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:45.635 03:01:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.635 03:01:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.635 03:01:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.635 03:01:24 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:16:45.635 03:01:24 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:45.635 03:01:24 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:45.635 03:01:24 -- nvmf/common.sh@675 -- # echo 0 00:16:45.635 03:01:24 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:45.635 03:01:24 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:45.635 03:01:24 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:45.635 03:01:24 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:45.635 03:01:24 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:16:45.635 03:01:24 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:16:45.635 03:01:24 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:46.568 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:46.568 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.568 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:46.568 ************************************ 00:16:46.568 END TEST nvmf_identify_kernel_target 00:16:46.568 ************************************ 00:16:46.568 00:16:46.568 real 0m2.973s 00:16:46.568 user 0m1.017s 00:16:46.568 sys 0m1.392s 00:16:46.568 03:01:25 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.568 03:01:25 -- common/autotest_common.sh@10 -- # set +x 00:16:46.568 03:01:25 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:46.568 03:01:25 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:46.568 03:01:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.568 03:01:25 -- common/autotest_common.sh@10 -- # set +x 00:16:46.828 ************************************ 00:16:46.828 START TEST nvmf_auth 00:16:46.828 ************************************ 00:16:46.828 03:01:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:46.828 * Looking for test storage... 
00:16:46.828 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:46.828 03:01:25 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:46.828 03:01:25 -- nvmf/common.sh@7 -- # uname -s 00:16:46.828 03:01:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:46.828 03:01:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:46.828 03:01:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:46.828 03:01:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:46.828 03:01:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:46.828 03:01:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:46.828 03:01:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:46.828 03:01:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:46.828 03:01:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:46.828 03:01:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:46.828 03:01:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:46.828 03:01:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:16:46.828 03:01:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:46.828 03:01:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:46.828 03:01:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:46.828 03:01:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:46.828 03:01:25 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:46.828 03:01:25 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:46.828 03:01:25 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:46.828 03:01:25 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:46.828 03:01:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.828 03:01:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.828 03:01:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.828 03:01:25 -- paths/export.sh@5 -- # export PATH 00:16:46.828 03:01:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:46.828 03:01:25 -- nvmf/common.sh@47 -- # : 0 00:16:46.828 03:01:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:46.828 03:01:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:46.828 03:01:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:46.828 03:01:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:46.828 03:01:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:46.828 03:01:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:46.828 03:01:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:46.828 03:01:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:46.828 03:01:25 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:46.828 03:01:25 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:46.828 03:01:25 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:16:46.828 03:01:25 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:46.828 03:01:25 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:46.828 03:01:25 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:46.828 03:01:25 -- host/auth.sh@21 -- # keys=() 00:16:46.828 03:01:25 -- host/auth.sh@77 -- # nvmftestinit 00:16:46.828 03:01:25 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:46.828 03:01:25 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:46.828 03:01:25 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:46.828 03:01:25 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:46.828 03:01:25 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:46.828 03:01:25 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:46.828 03:01:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:46.828 03:01:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:46.828 03:01:25 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:16:46.828 03:01:25 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:16:46.828 03:01:25 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:16:46.828 03:01:25 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:16:46.828 03:01:25 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:16:46.828 03:01:25 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:16:46.828 03:01:25 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:46.828 03:01:25 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:46.828 03:01:25 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:46.828 03:01:25 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:16:46.828 03:01:25 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:46.828 03:01:25 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:46.828 03:01:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:46.828 03:01:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:46.828 03:01:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:46.828 03:01:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:46.828 03:01:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:46.828 03:01:25 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:46.828 03:01:25 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:16:46.828 03:01:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:16:46.828 Cannot find device "nvmf_tgt_br" 00:16:46.828 03:01:25 -- nvmf/common.sh@155 -- # true 00:16:46.828 03:01:25 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:16:46.828 Cannot find device "nvmf_tgt_br2" 00:16:46.828 03:01:25 -- nvmf/common.sh@156 -- # true 00:16:46.828 03:01:25 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:16:46.828 03:01:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:16:46.828 Cannot find device "nvmf_tgt_br" 00:16:46.828 03:01:25 -- nvmf/common.sh@158 -- # true 00:16:46.828 03:01:25 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:16:46.828 Cannot find device "nvmf_tgt_br2" 00:16:46.828 03:01:25 -- nvmf/common.sh@159 -- # true 00:16:46.828 03:01:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:16:47.092 03:01:25 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:16:47.092 03:01:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:47.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.092 03:01:26 -- nvmf/common.sh@162 -- # true 00:16:47.092 03:01:26 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:47.092 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:47.092 03:01:26 -- nvmf/common.sh@163 -- # true 00:16:47.092 03:01:26 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:16:47.092 03:01:26 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:47.092 03:01:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:47.092 03:01:26 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:47.092 03:01:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:47.092 03:01:26 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:47.092 03:01:26 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:47.092 03:01:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:47.092 03:01:26 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:47.092 03:01:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:16:47.092 03:01:26 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:16:47.092 03:01:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:16:47.092 03:01:26 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:16:47.092 03:01:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:47.092 03:01:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:47.092 03:01:26 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:47.092 03:01:26 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:16:47.092 03:01:26 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:16:47.092 03:01:26 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:16:47.092 03:01:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:47.092 03:01:26 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:47.092 03:01:26 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:47.092 03:01:26 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:47.092 03:01:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:16:47.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:47.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:16:47.092 00:16:47.092 --- 10.0.0.2 ping statistics --- 00:16:47.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.092 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:16:47.092 03:01:26 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:16:47.353 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:47.353 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.100 ms 00:16:47.353 00:16:47.353 --- 10.0.0.3 ping statistics --- 00:16:47.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.353 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:16:47.353 03:01:26 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:47.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:47.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:16:47.353 00:16:47.353 --- 10.0.0.1 ping statistics --- 00:16:47.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:47.353 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:16:47.353 03:01:26 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:47.353 03:01:26 -- nvmf/common.sh@422 -- # return 0 00:16:47.353 03:01:26 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:47.353 03:01:26 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:47.353 03:01:26 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:47.353 03:01:26 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:47.353 03:01:26 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:47.353 03:01:26 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:47.353 03:01:26 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:47.353 03:01:26 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:16:47.353 03:01:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:47.353 03:01:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:47.353 03:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:47.353 03:01:26 -- nvmf/common.sh@470 -- # nvmfpid=90429 00:16:47.353 03:01:26 -- nvmf/common.sh@471 -- # waitforlisten 90429 00:16:47.353 03:01:26 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:16:47.353 03:01:26 -- common/autotest_common.sh@817 -- # '[' -z 90429 ']' 00:16:47.353 03:01:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.353 03:01:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.353 03:01:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
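
At this point the harness has finished building its private test network: the namespace nvmf_tgt_ns_spdk holds the target ends of two veth pairs (nvmf_tgt_if at 10.0.0.2, nvmf_tgt_if2 at 10.0.0.3), the host keeps the initiator interface nvmf_init_if at 10.0.0.1, the three peer ends are enslaved to the bridge nvmf_br, TCP port 4420 is opened, and reachability is proven with single pings in both directions. Condensed into one sketch (order slightly regrouped, but using only commands that appear in the trace above):

# Test topology: initiator side on the host, target side inside a namespace, joined by a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br    # bridge joins the host side to the namespace side
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                 # host -> namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1        # namespace -> host

Since nvmf/common.sh@469 prefixes the app with ip netns exec nvmf_tgt_ns_spdk, the SPDK process lives inside the namespace while the kernel nvmet target configured below listens on 10.0.0.1 on the host side, so the authenticated connects really do cross the bridged link.
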
00:16:47.353 03:01:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.353 03:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:47.616 03:01:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:47.616 03:01:26 -- common/autotest_common.sh@850 -- # return 0 00:16:47.616 03:01:26 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:47.616 03:01:26 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:47.616 03:01:26 -- common/autotest_common.sh@10 -- # set +x 00:16:47.616 03:01:26 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:47.616 03:01:26 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:47.616 03:01:26 -- host/auth.sh@81 -- # gen_key null 32 00:16:47.616 03:01:26 -- host/auth.sh@53 -- # local digest len file key 00:16:47.616 03:01:26 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.616 03:01:26 -- host/auth.sh@54 -- # local -A digests 00:16:47.616 03:01:26 -- host/auth.sh@56 -- # digest=null 00:16:47.616 03:01:26 -- host/auth.sh@56 -- # len=32 00:16:47.616 03:01:26 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:47.616 03:01:26 -- host/auth.sh@57 -- # key=25761279248295b9c4b1f74589094fe6 00:16:47.616 03:01:26 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:16:47.616 03:01:26 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.Qd3 00:16:47.616 03:01:26 -- host/auth.sh@59 -- # format_dhchap_key 25761279248295b9c4b1f74589094fe6 0 00:16:47.616 03:01:26 -- nvmf/common.sh@708 -- # format_key DHHC-1 25761279248295b9c4b1f74589094fe6 0 00:16:47.616 03:01:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.616 03:01:26 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.616 03:01:26 -- nvmf/common.sh@693 -- # key=25761279248295b9c4b1f74589094fe6 00:16:47.616 03:01:26 -- nvmf/common.sh@693 -- # digest=0 00:16:47.616 03:01:26 -- nvmf/common.sh@694 -- # python - 00:16:47.616 03:01:26 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.Qd3 00:16:47.616 03:01:26 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.Qd3 00:16:47.616 03:01:26 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.Qd3 00:16:47.616 03:01:26 -- host/auth.sh@82 -- # gen_key null 48 00:16:47.616 03:01:26 -- host/auth.sh@53 -- # local digest len file key 00:16:47.616 03:01:26 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.616 03:01:26 -- host/auth.sh@54 -- # local -A digests 00:16:47.616 03:01:26 -- host/auth.sh@56 -- # digest=null 00:16:47.616 03:01:26 -- host/auth.sh@56 -- # len=48 00:16:47.616 03:01:26 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:47.616 03:01:26 -- host/auth.sh@57 -- # key=4d2947d13b1746eaed097275880bc514c1b7d2dd9969b23c 00:16:47.616 03:01:26 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:16:47.616 03:01:26 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.svp 00:16:47.616 03:01:26 -- host/auth.sh@59 -- # format_dhchap_key 4d2947d13b1746eaed097275880bc514c1b7d2dd9969b23c 0 00:16:47.616 03:01:26 -- nvmf/common.sh@708 -- # format_key DHHC-1 4d2947d13b1746eaed097275880bc514c1b7d2dd9969b23c 0 00:16:47.616 03:01:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.616 03:01:26 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.616 03:01:26 -- nvmf/common.sh@693 -- # key=4d2947d13b1746eaed097275880bc514c1b7d2dd9969b23c 00:16:47.616 03:01:26 -- nvmf/common.sh@693 -- # digest=0 00:16:47.616 
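
The gen_key trace at host/auth.sh@53-62 shows the pattern: half as many random bytes as the requested key length are read from /dev/urandom via xxd (so len counts hex characters), the formatted secret lands in a mktemp file, and the file is restricted to mode 0600. Restated as a sketch (the redirect of format_dhchap_key into $file is inferred, since xtrace does not display redirections):

gen_key() {    # sketch of the helper as traced above
    local digest=$1 len=$2 file key
    local -A digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3')
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)     # len hex chars of entropy
    file=$(mktemp -t "spdk.key-$digest.XXX")
    format_dhchap_key "$key" "${digests[$digest]}" > "$file"   # DHHC-1 wrapper, sketched below
    chmod 0600 "$file"                                 # DHCHAP secrets must stay private
    echo "$file"
}

Called as at host/auth.sh@81, e.g. keys[0]=$(gen_key null 32).
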
03:01:26 -- nvmf/common.sh@694 -- # python - 00:16:47.875 03:01:26 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.svp 00:16:47.875 03:01:26 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.svp 00:16:47.875 03:01:26 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.svp 00:16:47.875 03:01:26 -- host/auth.sh@83 -- # gen_key sha256 32 00:16:47.875 03:01:26 -- host/auth.sh@53 -- # local digest len file key 00:16:47.875 03:01:26 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.875 03:01:26 -- host/auth.sh@54 -- # local -A digests 00:16:47.875 03:01:26 -- host/auth.sh@56 -- # digest=sha256 00:16:47.875 03:01:26 -- host/auth.sh@56 -- # len=32 00:16:47.875 03:01:26 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:47.875 03:01:26 -- host/auth.sh@57 -- # key=562d082231511839975d24feaf13ce23 00:16:47.875 03:01:26 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:16:47.875 03:01:26 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.tOK 00:16:47.875 03:01:26 -- host/auth.sh@59 -- # format_dhchap_key 562d082231511839975d24feaf13ce23 1 00:16:47.875 03:01:26 -- nvmf/common.sh@708 -- # format_key DHHC-1 562d082231511839975d24feaf13ce23 1 00:16:47.875 03:01:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # key=562d082231511839975d24feaf13ce23 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # digest=1 00:16:47.875 03:01:26 -- nvmf/common.sh@694 -- # python - 00:16:47.875 03:01:26 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.tOK 00:16:47.875 03:01:26 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.tOK 00:16:47.875 03:01:26 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.tOK 00:16:47.875 03:01:26 -- host/auth.sh@84 -- # gen_key sha384 48 00:16:47.875 03:01:26 -- host/auth.sh@53 -- # local digest len file key 00:16:47.875 03:01:26 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.875 03:01:26 -- host/auth.sh@54 -- # local -A digests 00:16:47.875 03:01:26 -- host/auth.sh@56 -- # digest=sha384 00:16:47.875 03:01:26 -- host/auth.sh@56 -- # len=48 00:16:47.875 03:01:26 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:47.875 03:01:26 -- host/auth.sh@57 -- # key=fb8568b14a0a27578892f03d88888d491262548bf889f853 00:16:47.875 03:01:26 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:16:47.875 03:01:26 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.Aqx 00:16:47.875 03:01:26 -- host/auth.sh@59 -- # format_dhchap_key fb8568b14a0a27578892f03d88888d491262548bf889f853 2 00:16:47.875 03:01:26 -- nvmf/common.sh@708 -- # format_key DHHC-1 fb8568b14a0a27578892f03d88888d491262548bf889f853 2 00:16:47.875 03:01:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # key=fb8568b14a0a27578892f03d88888d491262548bf889f853 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # digest=2 00:16:47.875 03:01:26 -- nvmf/common.sh@694 -- # python - 00:16:47.875 03:01:26 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.Aqx 00:16:47.875 03:01:26 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.Aqx 00:16:47.875 03:01:26 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.Aqx 00:16:47.875 03:01:26 -- host/auth.sh@85 -- # gen_key sha512 64 00:16:47.875 03:01:26 -- host/auth.sh@53 -- # local digest len file key 00:16:47.875 03:01:26 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:47.875 03:01:26 -- host/auth.sh@54 -- # local -A digests 00:16:47.875 03:01:26 -- host/auth.sh@56 -- # digest=sha512 00:16:47.875 03:01:26 -- host/auth.sh@56 -- # len=64 00:16:47.875 03:01:26 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:47.875 03:01:26 -- host/auth.sh@57 -- # key=4d79a432f268517af56dba670d28c41bf031ee553cf7e7fa4bb7488e9af79218 00:16:47.875 03:01:26 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:16:47.875 03:01:26 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.FRR 00:16:47.875 03:01:26 -- host/auth.sh@59 -- # format_dhchap_key 4d79a432f268517af56dba670d28c41bf031ee553cf7e7fa4bb7488e9af79218 3 00:16:47.875 03:01:26 -- nvmf/common.sh@708 -- # format_key DHHC-1 4d79a432f268517af56dba670d28c41bf031ee553cf7e7fa4bb7488e9af79218 3 00:16:47.875 03:01:26 -- nvmf/common.sh@691 -- # local prefix key digest 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # key=4d79a432f268517af56dba670d28c41bf031ee553cf7e7fa4bb7488e9af79218 00:16:47.875 03:01:26 -- nvmf/common.sh@693 -- # digest=3 00:16:47.875 03:01:26 -- nvmf/common.sh@694 -- # python - 00:16:47.875 03:01:27 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.FRR 00:16:47.875 03:01:27 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.FRR 00:16:47.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:47.875 03:01:27 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.FRR 00:16:47.875 03:01:27 -- host/auth.sh@87 -- # waitforlisten 90429 00:16:47.875 03:01:27 -- common/autotest_common.sh@817 -- # '[' -z 90429 ']' 00:16:47.875 03:01:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:47.875 03:01:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:47.875 03:01:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
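
All five secrets are now on disk: two null-digest keys (32 and 48 hex chars, keyids 0-1) plus sha256/sha384/sha512 keys (keyids 2-4). The body of format_key is an inline `python -` whose source xtrace cannot capture, but the DHHC-1 strings surfacing later in this log (e.g. DHHC-1:00:NGQyOTQ3...) base64-decode to the ASCII hex key followed by four extra bytes, i.e. the hex string with a 32-bit checksum appended, which matches the DH-HMAC-CHAP secret representation. A sketch under that assumption (the checksum is assumed to be a little-endian CRC-32; the real inline script is not visible in the trace):

format_key() {   # assumed reconstruction of nvmf/common.sh format_key
    local prefix=$1 key=$2 digest=$3
    python3 -c 'import base64, binascii, struct, sys
key = sys.argv[2].encode()                           # secret bytes = the ASCII hex string itself
blob = key + struct.pack("<I", binascii.crc32(key))  # append CRC-32, assumed little-endian
print(f"{sys.argv[1]}:{int(sys.argv[3]):02}:{base64.b64encode(blob).decode()}:")' \
        "$prefix" "$key" "$digest"
}
# format_key DHHC-1 4d2947d13b1746eaed097275880bc514c1b7d2dd9969b23c 0
#   -> should reproduce DHHC-1:00:NGQyOTQ3...CyaMpg==: as seen later at host/auth.sh@45,
#      if the checksum assumption holds.
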
00:16:47.875 03:01:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:47.875 03:01:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.440 03:01:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:48.440 03:01:27 -- common/autotest_common.sh@850 -- # return 0 00:16:48.440 03:01:27 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.440 03:01:27 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Qd3 00:16:48.440 03:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.440 03:01:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.440 03:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.440 03:01:27 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.440 03:01:27 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.svp 00:16:48.440 03:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.440 03:01:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.440 03:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.440 03:01:27 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.440 03:01:27 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tOK 00:16:48.440 03:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.440 03:01:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.440 03:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.440 03:01:27 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.440 03:01:27 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Aqx 00:16:48.440 03:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.440 03:01:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.440 03:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.440 03:01:27 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:16:48.440 03:01:27 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.FRR 00:16:48.440 03:01:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:48.440 03:01:27 -- common/autotest_common.sh@10 -- # set +x 00:16:48.440 03:01:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:48.440 03:01:27 -- host/auth.sh@92 -- # nvmet_auth_init 00:16:48.440 03:01:27 -- host/auth.sh@35 -- # get_main_ns_ip 00:16:48.440 03:01:27 -- nvmf/common.sh@717 -- # local ip 00:16:48.440 03:01:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:48.440 03:01:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:48.440 03:01:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.440 03:01:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.440 03:01:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:48.440 03:01:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.440 03:01:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:48.440 03:01:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:48.440 03:01:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:48.440 03:01:27 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:48.440 03:01:27 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:48.440 03:01:27 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:16:48.440 03:01:27 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:48.440 03:01:27 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:48.440 03:01:27 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:48.440 03:01:27 -- nvmf/common.sh@628 -- # local block nvme 00:16:48.440 03:01:27 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:16:48.440 03:01:27 -- nvmf/common.sh@631 -- # modprobe nvmet 00:16:48.440 03:01:27 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:48.440 03:01:27 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:48.698 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:48.698 Waiting for block devices as requested 00:16:48.698 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:48.956 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:49.546 03:01:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.546 03:01:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:49.546 03:01:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:16:49.546 03:01:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:49.546 03:01:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:49.546 03:01:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.546 03:01:28 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:16:49.546 03:01:28 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:49.546 03:01:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:49.546 No valid GPT data, bailing 00:16:49.546 03:01:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:49.546 03:01:28 -- scripts/common.sh@391 -- # pt= 00:16:49.546 03:01:28 -- scripts/common.sh@392 -- # return 1 00:16:49.546 03:01:28 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:16:49.546 03:01:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.546 03:01:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:49.546 03:01:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:16:49.546 03:01:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:49.546 03:01:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:49.546 03:01:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.546 03:01:28 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:16:49.546 03:01:28 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:49.546 03:01:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:49.546 No valid GPT data, bailing 00:16:49.546 03:01:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:49.546 03:01:28 -- scripts/common.sh@391 -- # pt= 00:16:49.546 03:01:28 -- scripts/common.sh@392 -- # return 1 00:16:49.546 03:01:28 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:16:49.546 03:01:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.546 03:01:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:49.546 03:01:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:16:49.546 03:01:28 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:49.546 03:01:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:49.546 03:01:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.546 03:01:28 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:16:49.546 03:01:28 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:49.546 03:01:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:49.805 No valid GPT data, bailing 00:16:49.805 03:01:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:49.805 03:01:28 -- scripts/common.sh@391 -- # pt= 00:16:49.805 03:01:28 -- scripts/common.sh@392 -- # return 1 00:16:49.805 03:01:28 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:16:49.805 03:01:28 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:16:49.805 03:01:28 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:49.805 03:01:28 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:16:49.805 03:01:28 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:49.805 03:01:28 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:49.805 03:01:28 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:49.805 03:01:28 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:16:49.806 03:01:28 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:49.806 03:01:28 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:49.806 No valid GPT data, bailing 00:16:49.806 03:01:28 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:49.806 03:01:28 -- scripts/common.sh@391 -- # pt= 00:16:49.806 03:01:28 -- scripts/common.sh@392 -- # return 1 00:16:49.806 03:01:28 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:16:49.806 03:01:28 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:16:49.806 03:01:28 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:49.806 03:01:28 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:49.806 03:01:28 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:49.806 03:01:28 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:16:49.806 03:01:28 -- nvmf/common.sh@656 -- # echo 1 00:16:49.806 03:01:28 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:16:49.806 03:01:28 -- nvmf/common.sh@658 -- # echo 1 00:16:49.806 03:01:28 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:16:49.806 03:01:28 -- nvmf/common.sh@661 -- # echo tcp 00:16:49.806 03:01:28 -- nvmf/common.sh@662 -- # echo 4420 00:16:49.806 03:01:28 -- nvmf/common.sh@663 -- # echo ipv4 00:16:49.806 03:01:28 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:49.806 03:01:28 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -a 10.0.0.1 -t tcp -s 4420 00:16:49.806 00:16:49.806 Discovery Log Number of Records 2, Generation counter 2 00:16:49.806 =====Discovery Log Entry 0====== 00:16:49.806 trtype: tcp 00:16:49.806 adrfam: ipv4 00:16:49.806 subtype: current discovery subsystem 00:16:49.806 treq: not specified, sq flow control disable supported 00:16:49.806 portid: 1 00:16:49.806 trsvcid: 4420 00:16:49.806 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:49.806 traddr: 10.0.0.1 00:16:49.806 eflags: none 00:16:49.806 sectype: none 00:16:49.806 =====Discovery Log Entry 1====== 00:16:49.806 trtype: tcp 00:16:49.806 adrfam: ipv4 00:16:49.806 subtype: nvme subsystem 00:16:49.806 treq: not specified, sq flow control disable supported 
00:16:49.806 portid: 1 00:16:49.806 trsvcid: 4420 00:16:49.806 subnqn: nqn.2024-02.io.spdk:cnode0 00:16:49.806 traddr: 10.0.0.1 00:16:49.806 eflags: none 00:16:49.806 sectype: none 00:16:49.806 03:01:28 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:49.806 03:01:28 -- host/auth.sh@37 -- # echo 0 00:16:49.806 03:01:28 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:16:49.806 03:01:28 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:49.806 03:01:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:49.806 03:01:28 -- host/auth.sh@44 -- # digest=sha256 00:16:49.806 03:01:28 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.806 03:01:28 -- host/auth.sh@44 -- # keyid=1 00:16:49.806 03:01:28 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:49.806 03:01:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:49.806 03:01:28 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.065 03:01:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:50.065 03:01:29 -- host/auth.sh@100 -- # IFS=, 00:16:50.065 03:01:29 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:16:50.065 03:01:29 -- host/auth.sh@100 -- # IFS=, 00:16:50.065 03:01:29 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.065 03:01:29 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:50.065 03:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.065 03:01:29 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:16:50.065 03:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.065 03:01:29 -- host/auth.sh@68 -- # keyid=1 00:16:50.065 03:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:50.065 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.065 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.065 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.065 03:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.065 03:01:29 -- nvmf/common.sh@717 -- # local ip 00:16:50.065 03:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.065 03:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.065 03:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.065 03:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.065 03:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.065 03:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.065 03:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.065 03:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.065 03:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.065 03:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:50.065 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.065 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.065 
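
The configure_kernel_target run above (nvmf/common.sh@621 onward) provisions the Linux nvmet target through configfs: it creates the subsystem, backs namespace 1 with /dev/nvme1n1 (the first non-zoned, non-GPT block device found by the scan), exposes it on an NVMe/TCP port at 10.0.0.1:4420, and host/auth.sh@36-38 then turns off allow_any_host and whitelists the single host NQN. The echo redirect targets are hidden by xtrace; the standard nvmet attributes they presumably map to are:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # model string (assumed attribute)
echo 1            > "$subsys/attr_allow_any_host"             # opened here, restricted again below
echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"        # backing device picked by the scan
echo 1            > "$subsys/namespaces/1/enable"
echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
echo tcp          > "$nvmet/ports/1/addr_trtype"
echo 4420         > "$nvmet/ports/1/addr_trsvcid"
echo ipv4         > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"                  # expose the subsystem on the port
# host/auth.sh@36-38: pin the subsystem to exactly one host NQN
mkdir "$nvmet/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/attr_allow_any_host"
ln -s "$nvmet/hosts/nqn.2024-02.io.spdk:host0" "$subsys/allowed_hosts/"

The nvme discover output above, with its two records (the discovery subsystem plus nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420), confirms the port and subsystem came up as intended.
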
nvme0n1 00:16:50.065 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.065 03:01:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.065 03:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.065 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.065 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.065 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.065 03:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.065 03:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.065 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.065 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.324 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.324 03:01:29 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:16:50.324 03:01:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.324 03:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.324 03:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:50.324 03:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.324 03:01:29 -- host/auth.sh@44 -- # digest=sha256 00:16:50.324 03:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.324 03:01:29 -- host/auth.sh@44 -- # keyid=0 00:16:50.324 03:01:29 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:50.324 03:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.324 03:01:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.324 03:01:29 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:50.324 03:01:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:16:50.324 03:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.324 03:01:29 -- host/auth.sh@68 -- # digest=sha256 00:16:50.324 03:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.324 03:01:29 -- host/auth.sh@68 -- # keyid=0 00:16:50.324 03:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.324 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.324 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.324 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.324 03:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.324 03:01:29 -- nvmf/common.sh@717 -- # local ip 00:16:50.324 03:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.325 03:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.325 03:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.325 03:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.325 03:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.325 03:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.325 03:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.325 03:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.325 03:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.325 03:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:50.325 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.325 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.325 nvme0n1 
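
Each connect_authenticate iteration (host/auth.sh@66-74, traced above and repeated from here on) reconfigures the SPDK initiator for exactly one digest/dhgroup pair, attaches to the kernel target with one of the keyring keys registered earlier, asserts that a controller actually materialized, and detaches again. As a sketch:

connect_authenticate() {   # sketch of host/auth.sh connect_authenticate from the trace
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # get_main_ns_ip resolves to 10.0.0.1 (NVMF_INITIATOR_IP) throughout this run
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    # the attach only succeeds if the DH-HMAC-CHAP handshake with the kernel target passed
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}

The stray nvme0n1 lines between iterations are the namespace device name reported as each controller comes and goes.
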
00:16:50.325 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.325 03:01:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.325 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.325 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.325 03:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.325 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.325 03:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.325 03:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.325 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.325 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.325 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.325 03:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.325 03:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:50.325 03:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.325 03:01:29 -- host/auth.sh@44 -- # digest=sha256 00:16:50.325 03:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.325 03:01:29 -- host/auth.sh@44 -- # keyid=1 00:16:50.325 03:01:29 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:50.325 03:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.325 03:01:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.325 03:01:29 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:50.325 03:01:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:16:50.325 03:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.325 03:01:29 -- host/auth.sh@68 -- # digest=sha256 00:16:50.325 03:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.325 03:01:29 -- host/auth.sh@68 -- # keyid=1 00:16:50.325 03:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.325 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.325 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.325 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.325 03:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.325 03:01:29 -- nvmf/common.sh@717 -- # local ip 00:16:50.325 03:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.325 03:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.325 03:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.325 03:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.325 03:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.325 03:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.325 03:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.325 03:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.325 03:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.325 03:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:50.325 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.325 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.584 nvme0n1 00:16:50.584 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.584 03:01:29 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:16:50.584 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.584 03:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.584 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.584 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.584 03:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.584 03:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.584 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.584 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.584 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.584 03:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.584 03:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:50.584 03:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.584 03:01:29 -- host/auth.sh@44 -- # digest=sha256 00:16:50.584 03:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.584 03:01:29 -- host/auth.sh@44 -- # keyid=2 00:16:50.584 03:01:29 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:50.584 03:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.584 03:01:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.584 03:01:29 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:50.584 03:01:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:16:50.584 03:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.584 03:01:29 -- host/auth.sh@68 -- # digest=sha256 00:16:50.584 03:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.584 03:01:29 -- host/auth.sh@68 -- # keyid=2 00:16:50.584 03:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.584 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.584 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.584 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.584 03:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.584 03:01:29 -- nvmf/common.sh@717 -- # local ip 00:16:50.584 03:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.584 03:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.584 03:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.584 03:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.584 03:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.584 03:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.584 03:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.584 03:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.584 03:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.584 03:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:50.584 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.584 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.584 nvme0n1 00:16:50.584 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.584 03:01:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.584 03:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.584 03:01:29 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:16:50.584 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.584 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.844 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.844 03:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:50.844 03:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.844 03:01:29 -- host/auth.sh@44 -- # digest=sha256 00:16:50.844 03:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.844 03:01:29 -- host/auth.sh@44 -- # keyid=3 00:16:50.844 03:01:29 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:50.844 03:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.844 03:01:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.844 03:01:29 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:50.844 03:01:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:16:50.844 03:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.844 03:01:29 -- host/auth.sh@68 -- # digest=sha256 00:16:50.844 03:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.844 03:01:29 -- host/auth.sh@68 -- # keyid=3 00:16:50.844 03:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.844 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.844 03:01:29 -- nvmf/common.sh@717 -- # local ip 00:16:50.844 03:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.844 03:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.844 03:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.844 03:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.844 03:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.844 03:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.844 03:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.844 03:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.844 03:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.844 03:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.844 nvme0n1 00:16:50.844 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.844 03:01:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.844 03:01:29 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.844 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:50.844 03:01:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:50.844 03:01:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:50.844 03:01:29 -- host/auth.sh@44 -- # digest=sha256 00:16:50.844 03:01:29 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.844 03:01:29 -- host/auth.sh@44 -- # keyid=4 00:16:50.844 03:01:29 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:50.844 03:01:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:50.844 03:01:29 -- host/auth.sh@48 -- # echo ffdhe2048 00:16:50.844 03:01:29 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:50.844 03:01:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:16:50.844 03:01:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:50.844 03:01:29 -- host/auth.sh@68 -- # digest=sha256 00:16:50.844 03:01:29 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:16:50.844 03:01:29 -- host/auth.sh@68 -- # keyid=4 00:16:50.844 03:01:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:50.844 03:01:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:50.844 03:01:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:50.844 03:01:29 -- nvmf/common.sh@717 -- # local ip 00:16:50.844 03:01:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:50.844 03:01:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:50.844 03:01:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.844 03:01:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.844 03:01:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:50.844 03:01:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.844 03:01:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:50.844 03:01:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:50.844 03:01:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:50.844 03:01:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.844 03:01:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:50.844 03:01:29 -- common/autotest_common.sh@10 -- # set +x 00:16:51.103 nvme0n1 00:16:51.103 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.103 03:01:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.103 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.103 03:01:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.103 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.103 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.103 03:01:30 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.103 03:01:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.103 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.103 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.103 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.103 03:01:30 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.103 03:01:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:51.103 03:01:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:51.103 03:01:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:51.103 03:01:30 -- host/auth.sh@44 -- # digest=sha256 00:16:51.103 03:01:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.103 03:01:30 -- host/auth.sh@44 -- # keyid=0 00:16:51.104 03:01:30 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:51.104 03:01:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:51.104 03:01:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:51.374 03:01:30 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:51.374 03:01:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:16:51.374 03:01:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:51.374 03:01:30 -- host/auth.sh@68 -- # digest=sha256 00:16:51.374 03:01:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:51.374 03:01:30 -- host/auth.sh@68 -- # keyid=0 00:16:51.374 03:01:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.374 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.374 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.374 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.374 03:01:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:51.374 03:01:30 -- nvmf/common.sh@717 -- # local ip 00:16:51.374 03:01:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:51.374 03:01:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:51.374 03:01:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.374 03:01:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.374 03:01:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:51.374 03:01:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.374 03:01:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:51.374 03:01:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:51.374 03:01:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:51.374 03:01:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:51.374 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.374 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.647 nvme0n1 00:16:51.647 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.647 03:01:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.647 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.647 03:01:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.647 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.647 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.647 03:01:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.648 03:01:30 -- host/auth.sh@74 
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.648 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.648 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.648 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.648 03:01:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:51.648 03:01:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:51.648 03:01:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:51.648 03:01:30 -- host/auth.sh@44 -- # digest=sha256 00:16:51.648 03:01:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.648 03:01:30 -- host/auth.sh@44 -- # keyid=1 00:16:51.648 03:01:30 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:51.648 03:01:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:51.648 03:01:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:51.648 03:01:30 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:51.648 03:01:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:16:51.648 03:01:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:51.648 03:01:30 -- host/auth.sh@68 -- # digest=sha256 00:16:51.648 03:01:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:51.648 03:01:30 -- host/auth.sh@68 -- # keyid=1 00:16:51.648 03:01:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.648 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.648 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.648 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.648 03:01:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:51.648 03:01:30 -- nvmf/common.sh@717 -- # local ip 00:16:51.648 03:01:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:51.648 03:01:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:51.648 03:01:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.648 03:01:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.648 03:01:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:51.648 03:01:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.648 03:01:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:51.648 03:01:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:51.648 03:01:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:51.648 03:01:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:51.648 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.648 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.908 nvme0n1 00:16:51.908 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.908 03:01:30 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.908 03:01:30 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.908 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.908 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.908 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.908 03:01:30 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.908 03:01:30 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.908 03:01:30 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:16:51.908 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.908 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.908 03:01:30 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:51.908 03:01:30 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:51.908 03:01:30 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:51.908 03:01:30 -- host/auth.sh@44 -- # digest=sha256 00:16:51.908 03:01:30 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.908 03:01:30 -- host/auth.sh@44 -- # keyid=2 00:16:51.908 03:01:30 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:51.908 03:01:30 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:51.908 03:01:30 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:51.908 03:01:30 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:51.908 03:01:30 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:16:51.908 03:01:30 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:51.908 03:01:30 -- host/auth.sh@68 -- # digest=sha256 00:16:51.908 03:01:30 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:51.908 03:01:30 -- host/auth.sh@68 -- # keyid=2 00:16:51.908 03:01:30 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:51.908 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.908 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.908 03:01:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.908 03:01:30 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:51.908 03:01:30 -- nvmf/common.sh@717 -- # local ip 00:16:51.908 03:01:30 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:51.908 03:01:30 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:51.908 03:01:30 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.908 03:01:30 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.908 03:01:30 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:51.908 03:01:30 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.908 03:01:30 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:51.908 03:01:30 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:51.908 03:01:30 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:51.908 03:01:30 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:51.908 03:01:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.908 03:01:30 -- common/autotest_common.sh@10 -- # set +x 00:16:51.908 nvme0n1 00:16:51.908 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:51.908 03:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.908 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:51.908 03:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:51.908 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.168 03:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.168 03:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.168 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.168 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.168 
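
The blocks repeating from here on are the outer sweep: sha256 has been exercised over ffdhe2048 with all five keys, the same keys are now replayed over ffdhe3072, and ffdhe4096 follows, exactly as the loop heads at host/auth.sh@107-110 indicate. The driving structure reduces to:

# Sweep behind the repetition: every digest x dhgroup x key combination is first
# installed on the kernel target, then proven end to end with one authenticated connect.
for digest in "${digests[@]}"; do           # sha256 sha384 sha512 (host/auth.sh@100)
    for dhgroup in "${dhgroups[@]}"; do     # ffdhe2048 ... ffdhe8192 (host/auth.sh@101)
        for keyid in "${!keys[@]}"; do      # keys[0..4] generated earlier
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
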
03:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:52.168 03:01:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:52.168 03:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:52.168 03:01:31 -- host/auth.sh@44 -- # digest=sha256 00:16:52.168 03:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.168 03:01:31 -- host/auth.sh@44 -- # keyid=3 00:16:52.168 03:01:31 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:52.168 03:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:52.168 03:01:31 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:52.168 03:01:31 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:52.168 03:01:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:16:52.168 03:01:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:52.168 03:01:31 -- host/auth.sh@68 -- # digest=sha256 00:16:52.168 03:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:52.168 03:01:31 -- host/auth.sh@68 -- # keyid=3 00:16:52.168 03:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.168 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.168 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.168 03:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:52.168 03:01:31 -- nvmf/common.sh@717 -- # local ip 00:16:52.168 03:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:52.168 03:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:52.168 03:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.168 03:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.168 03:01:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:52.168 03:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.168 03:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:52.168 03:01:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:52.168 03:01:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:52.168 03:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:52.168 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.168 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 nvme0n1 00:16:52.168 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.168 03:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.168 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.168 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.168 03:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:52.168 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.428 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.428 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.428 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:52.428 03:01:31 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:16:52.428 03:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:52.428 03:01:31 -- host/auth.sh@44 -- # digest=sha256 00:16:52.428 03:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:52.428 03:01:31 -- host/auth.sh@44 -- # keyid=4 00:16:52.428 03:01:31 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:52.428 03:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:52.428 03:01:31 -- host/auth.sh@48 -- # echo ffdhe3072 00:16:52.428 03:01:31 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:52.428 03:01:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:16:52.428 03:01:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:52.428 03:01:31 -- host/auth.sh@68 -- # digest=sha256 00:16:52.428 03:01:31 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:16:52.428 03:01:31 -- host/auth.sh@68 -- # keyid=4 00:16:52.428 03:01:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:52.428 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.428 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.428 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:52.428 03:01:31 -- nvmf/common.sh@717 -- # local ip 00:16:52.428 03:01:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:52.428 03:01:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:52.428 03:01:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.428 03:01:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.428 03:01:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:52.428 03:01:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.428 03:01:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:52.428 03:01:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:52.428 03:01:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:52.428 03:01:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.428 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.428 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.428 nvme0n1 00:16:52.428 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.428 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.428 03:01:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:52.428 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.428 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.428 03:01:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:52.428 03:01:31 -- common/autotest_common.sh@10 -- # set +x 00:16:52.428 03:01:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:52.428 03:01:31 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.428 03:01:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:52.428 03:01:31 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:16:52.428 03:01:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:52.428 03:01:31 -- host/auth.sh@44 -- # digest=sha256 00:16:52.428 03:01:31 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.428 03:01:31 -- host/auth.sh@44 -- # keyid=0 00:16:52.428 03:01:31 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:52.428 03:01:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:52.428 03:01:31 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:53.364 03:01:32 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:53.364 03:01:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:16:53.364 03:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:53.364 03:01:32 -- host/auth.sh@68 -- # digest=sha256 00:16:53.364 03:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:53.364 03:01:32 -- host/auth.sh@68 -- # keyid=0 00:16:53.364 03:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.364 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.364 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.364 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.364 03:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.364 03:01:32 -- nvmf/common.sh@717 -- # local ip 00:16:53.364 03:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.364 03:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.364 03:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.364 03:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.364 03:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.364 03:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.364 03:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.364 03:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.364 03:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.364 03:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:53.364 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.364 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.364 nvme0n1 00:16:53.364 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.364 03:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.364 03:01:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:53.364 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.364 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.364 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.624 03:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.624 03:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.624 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.624 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.624 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.624 03:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:53.624 03:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:53.624 03:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:53.624 03:01:32 -- host/auth.sh@44 -- # 
digest=sha256 00:16:53.624 03:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.624 03:01:32 -- host/auth.sh@44 -- # keyid=1 00:16:53.624 03:01:32 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:53.624 03:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:53.624 03:01:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:53.624 03:01:32 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:53.624 03:01:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:16:53.624 03:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:53.624 03:01:32 -- host/auth.sh@68 -- # digest=sha256 00:16:53.624 03:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:53.624 03:01:32 -- host/auth.sh@68 -- # keyid=1 00:16:53.624 03:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.624 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.624 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.624 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.624 03:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.624 03:01:32 -- nvmf/common.sh@717 -- # local ip 00:16:53.624 03:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.624 03:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.624 03:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.624 03:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.624 03:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.624 03:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.624 03:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.624 03:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.624 03:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.625 03:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:53.625 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.625 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.625 nvme0n1 00:16:53.625 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.625 03:01:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.625 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.625 03:01:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:53.625 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.884 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.884 03:01:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.884 03:01:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.884 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.884 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.884 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.884 03:01:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:53.884 03:01:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:53.884 03:01:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:53.884 03:01:32 -- host/auth.sh@44 -- # digest=sha256 00:16:53.884 03:01:32 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:53.884 03:01:32 -- host/auth.sh@44 
-- # keyid=2 00:16:53.884 03:01:32 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:53.884 03:01:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:53.884 03:01:32 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:53.884 03:01:32 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:53.884 03:01:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:16:53.884 03:01:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:53.884 03:01:32 -- host/auth.sh@68 -- # digest=sha256 00:16:53.884 03:01:32 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:53.884 03:01:32 -- host/auth.sh@68 -- # keyid=2 00:16:53.884 03:01:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:53.884 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.884 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:53.884 03:01:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:53.884 03:01:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:53.884 03:01:32 -- nvmf/common.sh@717 -- # local ip 00:16:53.884 03:01:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:53.884 03:01:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:53.884 03:01:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.884 03:01:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.884 03:01:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:53.884 03:01:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.884 03:01:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:53.884 03:01:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:53.884 03:01:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:53.884 03:01:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:53.884 03:01:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:53.884 03:01:32 -- common/autotest_common.sh@10 -- # set +x 00:16:54.147 nvme0n1 00:16:54.147 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.147 03:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.147 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.147 03:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:54.147 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.147 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.147 03:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.147 03:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.147 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.147 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.147 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.147 03:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:54.147 03:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:16:54.147 03:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:54.147 03:01:33 -- host/auth.sh@44 -- # digest=sha256 00:16:54.147 03:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.147 03:01:33 -- host/auth.sh@44 -- # keyid=3 00:16:54.147 03:01:33 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:54.147 03:01:33 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:54.147 03:01:33 -- host/auth.sh@48 -- # echo ffdhe4096 00:16:54.147 03:01:33 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:54.147 03:01:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:16:54.147 03:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:54.147 03:01:33 -- host/auth.sh@68 -- # digest=sha256 00:16:54.147 03:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:54.147 03:01:33 -- host/auth.sh@68 -- # keyid=3 00:16:54.147 03:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.147 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.147 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.147 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.147 03:01:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:54.147 03:01:33 -- nvmf/common.sh@717 -- # local ip 00:16:54.147 03:01:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:54.147 03:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:54.147 03:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.147 03:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.147 03:01:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:54.147 03:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.147 03:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:54.147 03:01:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:54.147 03:01:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:54.147 03:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:54.147 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.147 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.408 nvme0n1 00:16:54.408 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.408 03:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.408 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.408 03:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:54.408 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.408 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.408 03:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.408 03:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.408 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.408 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.408 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.408 03:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:54.408 03:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:54.408 03:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:54.408 03:01:33 -- host/auth.sh@44 -- # digest=sha256 00:16:54.408 03:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:54.408 03:01:33 -- host/auth.sh@44 -- # keyid=4 00:16:54.408 03:01:33 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:54.408 03:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:54.408 03:01:33 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:16:54.408 03:01:33 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:54.408 03:01:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:16:54.408 03:01:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:54.408 03:01:33 -- host/auth.sh@68 -- # digest=sha256 00:16:54.408 03:01:33 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:16:54.408 03:01:33 -- host/auth.sh@68 -- # keyid=4 00:16:54.408 03:01:33 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.408 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.408 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.408 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.408 03:01:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:54.408 03:01:33 -- nvmf/common.sh@717 -- # local ip 00:16:54.408 03:01:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:54.408 03:01:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:54.408 03:01:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.408 03:01:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.408 03:01:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:54.408 03:01:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.408 03:01:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:54.408 03:01:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:54.408 03:01:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:54.408 03:01:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.408 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.408 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.666 nvme0n1 00:16:54.666 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.666 03:01:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:54.666 03:01:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.666 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.666 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.666 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.666 03:01:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.666 03:01:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.666 03:01:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:54.666 03:01:33 -- common/autotest_common.sh@10 -- # set +x 00:16:54.666 03:01:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:54.666 03:01:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.666 03:01:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:54.666 03:01:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:54.666 03:01:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:54.666 03:01:33 -- host/auth.sh@44 -- # digest=sha256 00:16:54.666 03:01:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.666 03:01:33 -- host/auth.sh@44 -- # keyid=0 00:16:54.666 03:01:33 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:54.666 03:01:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:54.666 03:01:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:56.566 03:01:35 -- 
host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:56.566 03:01:35 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:16:56.566 03:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:56.566 03:01:35 -- host/auth.sh@68 -- # digest=sha256 00:16:56.566 03:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:56.566 03:01:35 -- host/auth.sh@68 -- # keyid=0 00:16:56.566 03:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.566 03:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.566 03:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:56.566 03:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.566 03:01:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:56.566 03:01:35 -- nvmf/common.sh@717 -- # local ip 00:16:56.566 03:01:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:56.566 03:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:56.566 03:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.566 03:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.567 03:01:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:56.567 03:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.567 03:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:56.567 03:01:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:56.567 03:01:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:56.567 03:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:16:56.567 03:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.567 03:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:56.826 nvme0n1 00:16:56.826 03:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.826 03:01:35 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.826 03:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.826 03:01:35 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:56.826 03:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:56.826 03:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.826 03:01:35 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.826 03:01:35 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.826 03:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.826 03:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:56.826 03:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.826 03:01:35 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:56.826 03:01:35 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:56.826 03:01:35 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:56.826 03:01:35 -- host/auth.sh@44 -- # digest=sha256 00:16:56.826 03:01:35 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:56.826 03:01:35 -- host/auth.sh@44 -- # keyid=1 00:16:56.826 03:01:35 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:56.826 03:01:35 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:56.826 03:01:35 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:56.826 03:01:35 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:16:56.826 03:01:35 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:16:56.826 03:01:35 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:56.826 03:01:35 -- host/auth.sh@68 -- # digest=sha256 00:16:56.826 03:01:35 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:56.826 03:01:35 -- host/auth.sh@68 -- # keyid=1 00:16:56.826 03:01:35 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:56.826 03:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.826 03:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:56.826 03:01:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:56.826 03:01:35 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:56.826 03:01:35 -- nvmf/common.sh@717 -- # local ip 00:16:56.826 03:01:35 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:56.826 03:01:35 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:56.826 03:01:35 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.826 03:01:35 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.826 03:01:35 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:56.826 03:01:35 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.826 03:01:35 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:56.826 03:01:35 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:56.826 03:01:35 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:56.826 03:01:35 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:16:56.826 03:01:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:56.826 03:01:35 -- common/autotest_common.sh@10 -- # set +x 00:16:57.086 nvme0n1 00:16:57.086 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.086 03:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.086 03:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:57.086 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.086 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.345 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.345 03:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.345 03:01:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.345 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.345 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.345 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.345 03:01:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:57.345 03:01:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:57.345 03:01:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:57.345 03:01:36 -- host/auth.sh@44 -- # digest=sha256 00:16:57.345 03:01:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.345 03:01:36 -- host/auth.sh@44 -- # keyid=2 00:16:57.345 03:01:36 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:57.345 03:01:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:57.345 03:01:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:57.345 03:01:36 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:16:57.345 03:01:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:16:57.345 03:01:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:57.345 03:01:36 -- 
host/auth.sh@68 -- # digest=sha256 00:16:57.345 03:01:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:57.345 03:01:36 -- host/auth.sh@68 -- # keyid=2 00:16:57.345 03:01:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.345 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.345 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.345 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.345 03:01:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:57.345 03:01:36 -- nvmf/common.sh@717 -- # local ip 00:16:57.345 03:01:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:57.345 03:01:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:57.345 03:01:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.345 03:01:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.345 03:01:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:57.345 03:01:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.345 03:01:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:57.345 03:01:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:57.345 03:01:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:57.345 03:01:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:16:57.345 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.345 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.604 nvme0n1 00:16:57.604 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.604 03:01:36 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.604 03:01:36 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:57.604 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.604 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.604 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.604 03:01:36 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.604 03:01:36 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.604 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.604 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.604 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.604 03:01:36 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:57.604 03:01:36 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:57.604 03:01:36 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:57.604 03:01:36 -- host/auth.sh@44 -- # digest=sha256 00:16:57.604 03:01:36 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:57.604 03:01:36 -- host/auth.sh@44 -- # keyid=3 00:16:57.604 03:01:36 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:57.604 03:01:36 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:57.604 03:01:36 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:57.604 03:01:36 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:16:57.604 03:01:36 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:16:57.604 03:01:36 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:57.604 03:01:36 -- host/auth.sh@68 -- # digest=sha256 00:16:57.604 03:01:36 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:57.604 03:01:36 
-- host/auth.sh@68 -- # keyid=3 00:16:57.604 03:01:36 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.604 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.604 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:57.863 03:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:57.863 03:01:36 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:57.863 03:01:36 -- nvmf/common.sh@717 -- # local ip 00:16:57.863 03:01:36 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:57.863 03:01:36 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:57.863 03:01:36 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.863 03:01:36 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.863 03:01:36 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:57.863 03:01:36 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.863 03:01:36 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:57.863 03:01:36 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:57.863 03:01:36 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:57.863 03:01:36 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:16:57.863 03:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:57.863 03:01:36 -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 nvme0n1 00:16:58.123 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.123 03:01:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.123 03:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.123 03:01:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:58.123 03:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.123 03:01:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.123 03:01:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.123 03:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.123 03:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.123 03:01:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:58.123 03:01:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:58.123 03:01:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:58.123 03:01:37 -- host/auth.sh@44 -- # digest=sha256 00:16:58.123 03:01:37 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:58.123 03:01:37 -- host/auth.sh@44 -- # keyid=4 00:16:58.123 03:01:37 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:58.123 03:01:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:58.123 03:01:37 -- host/auth.sh@48 -- # echo ffdhe6144 00:16:58.123 03:01:37 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:16:58.123 03:01:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:16:58.123 03:01:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:16:58.123 03:01:37 -- host/auth.sh@68 -- # digest=sha256 00:16:58.123 03:01:37 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:16:58.123 03:01:37 -- host/auth.sh@68 -- # keyid=4 00:16:58.123 03:01:37 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.123 03:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.123 03:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.123 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.123 03:01:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:16:58.123 03:01:37 -- nvmf/common.sh@717 -- # local ip 00:16:58.123 03:01:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:16:58.123 03:01:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:16:58.123 03:01:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.123 03:01:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.123 03:01:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:16:58.123 03:01:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.123 03:01:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:16:58.123 03:01:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:16:58.123 03:01:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:16:58.123 03:01:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:58.123 03:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.123 03:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 nvme0n1 00:16:58.383 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.383 03:01:37 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.383 03:01:37 -- host/auth.sh@73 -- # jq -r '.[].name' 00:16:58.383 03:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.383 03:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.383 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.642 03:01:37 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.642 03:01:37 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.642 03:01:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:58.642 03:01:37 -- common/autotest_common.sh@10 -- # set +x 00:16:58.642 03:01:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:58.642 03:01:37 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.642 03:01:37 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:16:58.642 03:01:37 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:58.642 03:01:37 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:16:58.642 03:01:37 -- host/auth.sh@44 -- # digest=sha256 00:16:58.642 03:01:37 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:58.642 03:01:37 -- host/auth.sh@44 -- # keyid=0 00:16:58.642 03:01:37 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:16:58.642 03:01:37 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:16:58.642 03:01:37 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:02.856 03:01:41 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:02.856 03:01:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:17:02.856 03:01:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:02.856 03:01:41 -- host/auth.sh@68 -- # digest=sha256 00:17:02.856 03:01:41 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:02.856 03:01:41 -- host/auth.sh@68 -- # keyid=0 00:17:02.856 03:01:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
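The surrounding records repeat one pattern per digest (sha256, then sha384 further down), DH group (ffdhe2048 through ffdhe8192) and key slot 0-4: the target-side key is re-published, the bdev_nvme host driver is pinned to the matching digest/dhgroup, and a controller is attached, verified and detached. A minimal bash sketch of one such pass, assuming scripts/rpc.py behind the rpc_cmd wrapper and a kernel-nvmet configfs layout behind the nvmet_auth_set_key helper (both are assumptions; only the RPC names, flags and key strings appear verbatim in this log):

# One sweep pass (digest=sha256, dhgroup=ffdhe8192, keyid=0), reconstructed sketch.
digest=sha256 dhgroup=ffdhe8192 keyid=0
key='DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9:'
host_cfs=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed configfs path

# Target side (nvmet_auth_set_key): publish hash, DH group and key for this host.
# Attribute names below are assumed, not shown in the trace.
echo "hmac($digest)" > "$host_cfs/dhchap_hash"
echo "$dhgroup"      > "$host_cfs/dhchap_dhgroup"
echo "$key"          > "$host_cfs/dhchap_key"

# Initiator side (connect_authenticate): pin digest/dhgroup, attach with the
# matching key slot (key0..key4 were registered earlier in the script, outside
# this excerpt), verify the controller name, then tear it down.
scripts/rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expected: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0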
00:17:02.857 03:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.857 03:01:41 -- common/autotest_common.sh@10 -- # set +x 00:17:02.857 03:01:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:02.857 03:01:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:02.857 03:01:41 -- nvmf/common.sh@717 -- # local ip 00:17:02.857 03:01:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:02.857 03:01:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:02.857 03:01:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.857 03:01:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.857 03:01:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:02.857 03:01:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.857 03:01:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:02.857 03:01:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:02.857 03:01:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:02.857 03:01:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:02.857 03:01:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:02.857 03:01:41 -- common/autotest_common.sh@10 -- # set +x 00:17:03.116 nvme0n1 00:17:03.116 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.116 03:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.116 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.116 03:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:03.116 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:03.116 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.116 03:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.116 03:01:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.116 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.116 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:03.116 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.116 03:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:03.116 03:01:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:17:03.116 03:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:03.116 03:01:42 -- host/auth.sh@44 -- # digest=sha256 00:17:03.116 03:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:03.116 03:01:42 -- host/auth.sh@44 -- # keyid=1 00:17:03.116 03:01:42 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:03.116 03:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:03.116 03:01:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:03.116 03:01:42 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:03.116 03:01:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:17:03.116 03:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:03.116 03:01:42 -- host/auth.sh@68 -- # digest=sha256 00:17:03.116 03:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:03.116 03:01:42 -- host/auth.sh@68 -- # keyid=1 00:17:03.116 03:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.116 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.116 03:01:42 -- 
common/autotest_common.sh@10 -- # set +x 00:17:03.116 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:03.116 03:01:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:03.116 03:01:42 -- nvmf/common.sh@717 -- # local ip 00:17:03.116 03:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:03.116 03:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:03.116 03:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.116 03:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.116 03:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:03.116 03:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.116 03:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:03.116 03:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:03.116 03:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:03.116 03:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:03.116 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:03.116 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 nvme0n1 00:17:04.051 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.051 03:01:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.051 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.051 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 03:01:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:04.051 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.051 03:01:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.051 03:01:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.051 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.051 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.051 03:01:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:04.051 03:01:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:17:04.051 03:01:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:04.051 03:01:42 -- host/auth.sh@44 -- # digest=sha256 00:17:04.051 03:01:42 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.051 03:01:42 -- host/auth.sh@44 -- # keyid=2 00:17:04.051 03:01:42 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:04.051 03:01:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:04.051 03:01:42 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:04.051 03:01:42 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:04.051 03:01:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:17:04.051 03:01:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:04.051 03:01:42 -- host/auth.sh@68 -- # digest=sha256 00:17:04.051 03:01:42 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:04.051 03:01:42 -- host/auth.sh@68 -- # keyid=2 00:17:04.051 03:01:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.051 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.051 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.051 03:01:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.051 03:01:42 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:17:04.051 03:01:42 -- nvmf/common.sh@717 -- # local ip 00:17:04.051 03:01:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:04.051 03:01:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:04.051 03:01:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.051 03:01:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.051 03:01:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:04.051 03:01:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.051 03:01:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:04.051 03:01:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:04.051 03:01:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:04.051 03:01:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:04.051 03:01:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.051 03:01:42 -- common/autotest_common.sh@10 -- # set +x 00:17:04.617 nvme0n1 00:17:04.617 03:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.617 03:01:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.617 03:01:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:04.617 03:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.617 03:01:43 -- common/autotest_common.sh@10 -- # set +x 00:17:04.617 03:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.617 03:01:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.617 03:01:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.617 03:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.617 03:01:43 -- common/autotest_common.sh@10 -- # set +x 00:17:04.617 03:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.617 03:01:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:04.617 03:01:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:17:04.617 03:01:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:04.617 03:01:43 -- host/auth.sh@44 -- # digest=sha256 00:17:04.617 03:01:43 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.617 03:01:43 -- host/auth.sh@44 -- # keyid=3 00:17:04.617 03:01:43 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:04.617 03:01:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:04.617 03:01:43 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:04.617 03:01:43 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:04.617 03:01:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:17:04.617 03:01:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:04.617 03:01:43 -- host/auth.sh@68 -- # digest=sha256 00:17:04.617 03:01:43 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:04.617 03:01:43 -- host/auth.sh@68 -- # keyid=3 00:17:04.617 03:01:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:04.617 03:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.617 03:01:43 -- common/autotest_common.sh@10 -- # set +x 00:17:04.617 03:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:04.617 03:01:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:04.617 03:01:43 -- nvmf/common.sh@717 -- # local ip 00:17:04.617 03:01:43 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:17:04.617 03:01:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:04.617 03:01:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.617 03:01:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.876 03:01:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:04.876 03:01:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.876 03:01:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:04.876 03:01:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:04.876 03:01:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:04.876 03:01:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:04.876 03:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:04.876 03:01:43 -- common/autotest_common.sh@10 -- # set +x 00:17:05.444 nvme0n1 00:17:05.444 03:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.444 03:01:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.444 03:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.444 03:01:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:05.444 03:01:44 -- common/autotest_common.sh@10 -- # set +x 00:17:05.445 03:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.445 03:01:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.445 03:01:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.445 03:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.445 03:01:44 -- common/autotest_common.sh@10 -- # set +x 00:17:05.445 03:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.445 03:01:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:05.445 03:01:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:17:05.445 03:01:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:05.445 03:01:44 -- host/auth.sh@44 -- # digest=sha256 00:17:05.445 03:01:44 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.445 03:01:44 -- host/auth.sh@44 -- # keyid=4 00:17:05.445 03:01:44 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:05.445 03:01:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:05.445 03:01:44 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:05.445 03:01:44 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:05.445 03:01:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:17:05.445 03:01:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:05.445 03:01:44 -- host/auth.sh@68 -- # digest=sha256 00:17:05.445 03:01:44 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:05.445 03:01:44 -- host/auth.sh@68 -- # keyid=4 00:17:05.445 03:01:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.445 03:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.445 03:01:44 -- common/autotest_common.sh@10 -- # set +x 00:17:05.445 03:01:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:05.445 03:01:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:05.445 03:01:44 -- nvmf/common.sh@717 -- # local ip 00:17:05.445 03:01:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:05.445 03:01:44 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:17:05.445 03:01:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.445 03:01:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.445 03:01:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:05.445 03:01:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.445 03:01:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:05.445 03:01:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:05.445 03:01:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:05.445 03:01:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:05.445 03:01:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:05.445 03:01:44 -- common/autotest_common.sh@10 -- # set +x 00:17:06.381 nvme0n1 00:17:06.381 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.381 03:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.381 03:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.381 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.381 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.381 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.381 03:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.381 03:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.381 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.381 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.381 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.381 03:01:45 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:17:06.381 03:01:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:06.381 03:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.381 03:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:17:06.381 03:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.381 03:01:45 -- host/auth.sh@44 -- # digest=sha384 00:17:06.381 03:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.381 03:01:45 -- host/auth.sh@44 -- # keyid=0 00:17:06.381 03:01:45 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:06.381 03:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.381 03:01:45 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.381 03:01:45 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:06.381 03:01:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:17:06.381 03:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.381 03:01:45 -- host/auth.sh@68 -- # digest=sha384 00:17:06.381 03:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.381 03:01:45 -- host/auth.sh@68 -- # keyid=0 00:17:06.381 03:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.381 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.381 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.381 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.381 03:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.382 03:01:45 -- nvmf/common.sh@717 -- # local ip 00:17:06.382 03:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.382 03:01:45 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:17:06.382 03:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.382 03:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.382 03:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.382 03:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.382 03:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.382 03:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.382 03:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.382 03:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:06.382 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.382 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 nvme0n1 00:17:06.382 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.382 03:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.382 03:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.382 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.382 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.382 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.640 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.640 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.640 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.640 03:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:17:06.640 03:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.640 03:01:45 -- host/auth.sh@44 -- # digest=sha384 00:17:06.640 03:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.640 03:01:45 -- host/auth.sh@44 -- # keyid=1 00:17:06.640 03:01:45 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:06.640 03:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.640 03:01:45 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.640 03:01:45 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:06.640 03:01:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:17:06.640 03:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.640 03:01:45 -- host/auth.sh@68 -- # digest=sha384 00:17:06.640 03:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.640 03:01:45 -- host/auth.sh@68 -- # keyid=1 00:17:06.640 03:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.640 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.640 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.640 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.640 03:01:45 -- nvmf/common.sh@717 -- # local ip 00:17:06.640 03:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.640 03:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.640 03:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.640 
03:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.640 03:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.640 03:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.640 03:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.640 03:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.640 03:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.640 03:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:06.640 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.640 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.640 nvme0n1 00:17:06.640 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.640 03:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.640 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.640 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.640 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.640 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.640 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.640 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.640 03:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:17:06.640 03:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.640 03:01:45 -- host/auth.sh@44 -- # digest=sha384 00:17:06.640 03:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.640 03:01:45 -- host/auth.sh@44 -- # keyid=2 00:17:06.640 03:01:45 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:06.640 03:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.640 03:01:45 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.640 03:01:45 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:06.640 03:01:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:17:06.640 03:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.640 03:01:45 -- host/auth.sh@68 -- # digest=sha384 00:17:06.640 03:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.640 03:01:45 -- host/auth.sh@68 -- # keyid=2 00:17:06.640 03:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.640 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.640 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.640 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.640 03:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.640 03:01:45 -- nvmf/common.sh@717 -- # local ip 00:17:06.640 03:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.640 03:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.641 03:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.641 03:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.641 03:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.641 03:01:45 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.641 03:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:06.641 03:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.641 03:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.641 03:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:06.641 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.641 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.899 nvme0n1 00:17:06.899 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.899 03:01:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.899 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.899 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.899 03:01:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.899 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.899 03:01:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.899 03:01:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.899 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.899 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.899 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.899 03:01:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:06.899 03:01:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:17:06.899 03:01:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:06.899 03:01:45 -- host/auth.sh@44 -- # digest=sha384 00:17:06.899 03:01:45 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:06.899 03:01:45 -- host/auth.sh@44 -- # keyid=3 00:17:06.899 03:01:45 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:06.899 03:01:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:06.899 03:01:45 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:06.899 03:01:45 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:06.899 03:01:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:17:06.900 03:01:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:06.900 03:01:45 -- host/auth.sh@68 -- # digest=sha384 00:17:06.900 03:01:45 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:06.900 03:01:45 -- host/auth.sh@68 -- # keyid=3 00:17:06.900 03:01:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:06.900 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.900 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.900 03:01:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.900 03:01:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:06.900 03:01:45 -- nvmf/common.sh@717 -- # local ip 00:17:06.900 03:01:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:06.900 03:01:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:06.900 03:01:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.900 03:01:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.900 03:01:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:06.900 03:01:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.900 03:01:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
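
The cycle that repeats throughout this trace is a fixed four-step RPC sequence against the SPDK initiator. A minimal standalone sketch of it, assuming a running SPDK application, scripts/rpc.py from the SPDK tree, and DH-HMAC-CHAP secrets already registered under the names key0..key4 earlier in the run; the addresses, NQNs, and flags are taken verbatim from the log:

rpc=scripts/rpc.py

# 1) Restrict the initiator to a single digest / DH-group pair for this pass.
$rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# 2) Attach the controller; DH-HMAC-CHAP runs as part of the connect.
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0

# 3) Authentication succeeded iff the controller actually materialized.
[[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

# 4) Tear down before the next (digest, dhgroup, keyid) combination.
$rpc bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] entries in the trace are this same comparison: bash xtrace backslash-escapes the right-hand side of == because it is treated as a glob pattern.
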
00:17:06.900 03:01:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:06.900 03:01:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:06.900 03:01:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:06.900 03:01:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.900 03:01:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.900 nvme0n1 00:17:06.900 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:06.900 03:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.900 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:06.900 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:06.900 03:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:06.900 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.157 03:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.157 03:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.157 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.157 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.157 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.157 03:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.157 03:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:17:07.157 03:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.157 03:01:46 -- host/auth.sh@44 -- # digest=sha384 00:17:07.157 03:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.157 03:01:46 -- host/auth.sh@44 -- # keyid=4 00:17:07.157 03:01:46 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:07.157 03:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.157 03:01:46 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:07.157 03:01:46 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:07.157 03:01:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:17:07.157 03:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.157 03:01:46 -- host/auth.sh@68 -- # digest=sha384 00:17:07.157 03:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:07.157 03:01:46 -- host/auth.sh@68 -- # keyid=4 00:17:07.157 03:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:07.157 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.157 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.157 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.157 03:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.157 03:01:46 -- nvmf/common.sh@717 -- # local ip 00:17:07.157 03:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.157 03:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.157 03:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.157 03:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.157 03:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.157 03:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.157 03:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.157 03:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.157 
03:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.157 03:01:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.158 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.158 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 nvme0n1 00:17:07.158 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.158 03:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.158 03:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.158 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.158 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.158 03:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.158 03:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.158 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.158 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.158 03:01:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:07.158 03:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.158 03:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:17:07.158 03:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.158 03:01:46 -- host/auth.sh@44 -- # digest=sha384 00:17:07.158 03:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.158 03:01:46 -- host/auth.sh@44 -- # keyid=0 00:17:07.158 03:01:46 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:07.158 03:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.158 03:01:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:07.158 03:01:46 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:07.158 03:01:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:17:07.158 03:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.158 03:01:46 -- host/auth.sh@68 -- # digest=sha384 00:17:07.158 03:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:07.158 03:01:46 -- host/auth.sh@68 -- # keyid=0 00:17:07.158 03:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.158 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.158 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.158 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.158 03:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.158 03:01:46 -- nvmf/common.sh@717 -- # local ip 00:17:07.158 03:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.158 03:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.158 03:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.158 03:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.158 03:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.158 03:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.158 03:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.158 03:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.158 03:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.158 03:01:46 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:07.158 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.158 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.418 nvme0n1 00:17:07.418 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.418 03:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.418 03:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.418 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.418 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.418 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.418 03:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.418 03:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.418 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.418 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.418 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.418 03:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.418 03:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:17:07.418 03:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.418 03:01:46 -- host/auth.sh@44 -- # digest=sha384 00:17:07.418 03:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.418 03:01:46 -- host/auth.sh@44 -- # keyid=1 00:17:07.418 03:01:46 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:07.418 03:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.418 03:01:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:07.418 03:01:46 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:07.418 03:01:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:17:07.418 03:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.418 03:01:46 -- host/auth.sh@68 -- # digest=sha384 00:17:07.418 03:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:07.418 03:01:46 -- host/auth.sh@68 -- # keyid=1 00:17:07.418 03:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.418 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.418 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.418 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.418 03:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.418 03:01:46 -- nvmf/common.sh@717 -- # local ip 00:17:07.418 03:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.418 03:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.418 03:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.418 03:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.418 03:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.418 03:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.418 03:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.418 03:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.418 03:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.418 03:01:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:07.418 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.418 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.694 nvme0n1 00:17:07.694 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.694 03:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.694 03:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.694 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.694 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.694 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.694 03:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.694 03:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.694 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.694 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.694 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.694 03:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.694 03:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:17:07.694 03:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.694 03:01:46 -- host/auth.sh@44 -- # digest=sha384 00:17:07.694 03:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.694 03:01:46 -- host/auth.sh@44 -- # keyid=2 00:17:07.694 03:01:46 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:07.694 03:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.694 03:01:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:07.694 03:01:46 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:07.694 03:01:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:17:07.694 03:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.694 03:01:46 -- host/auth.sh@68 -- # digest=sha384 00:17:07.694 03:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:07.694 03:01:46 -- host/auth.sh@68 -- # keyid=2 00:17:07.694 03:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.694 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.694 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.694 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.694 03:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.694 03:01:46 -- nvmf/common.sh@717 -- # local ip 00:17:07.694 03:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.694 03:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.694 03:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.694 03:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.694 03:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.694 03:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.694 03:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.694 03:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.694 03:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.694 03:01:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:07.694 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.694 
03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.964 nvme0n1 00:17:07.964 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.964 03:01:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.964 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.964 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.964 03:01:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.964 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.964 03:01:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.964 03:01:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.964 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.964 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.964 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.964 03:01:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:07.964 03:01:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:17:07.964 03:01:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:07.964 03:01:46 -- host/auth.sh@44 -- # digest=sha384 00:17:07.964 03:01:46 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:07.964 03:01:46 -- host/auth.sh@44 -- # keyid=3 00:17:07.964 03:01:46 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:07.964 03:01:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:07.964 03:01:46 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:07.964 03:01:46 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:07.964 03:01:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:17:07.964 03:01:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:07.964 03:01:46 -- host/auth.sh@68 -- # digest=sha384 00:17:07.964 03:01:46 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:07.964 03:01:46 -- host/auth.sh@68 -- # keyid=3 00:17:07.964 03:01:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:07.964 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.964 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.964 03:01:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.964 03:01:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:07.964 03:01:46 -- nvmf/common.sh@717 -- # local ip 00:17:07.964 03:01:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:07.964 03:01:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:07.964 03:01:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.964 03:01:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.964 03:01:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:07.964 03:01:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.964 03:01:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:07.964 03:01:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:07.964 03:01:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:07.964 03:01:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:07.964 03:01:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.964 03:01:46 -- common/autotest_common.sh@10 -- # set +x 00:17:07.964 nvme0n1 00:17:07.964 03:01:47 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:07.964 03:01:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.964 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:07.964 03:01:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:07.964 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:07.964 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.223 03:01:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.223 03:01:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.223 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.223 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.223 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.223 03:01:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.223 03:01:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:17:08.223 03:01:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.223 03:01:47 -- host/auth.sh@44 -- # digest=sha384 00:17:08.223 03:01:47 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:08.223 03:01:47 -- host/auth.sh@44 -- # keyid=4 00:17:08.223 03:01:47 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:08.223 03:01:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.223 03:01:47 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:08.223 03:01:47 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:08.223 03:01:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:17:08.223 03:01:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.223 03:01:47 -- host/auth.sh@68 -- # digest=sha384 00:17:08.223 03:01:47 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:08.223 03:01:47 -- host/auth.sh@68 -- # keyid=4 00:17:08.223 03:01:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:08.223 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.223 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.223 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.223 03:01:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.223 03:01:47 -- nvmf/common.sh@717 -- # local ip 00:17:08.223 03:01:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.223 03:01:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.224 03:01:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.224 03:01:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.224 03:01:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.224 03:01:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.224 03:01:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.224 03:01:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.224 03:01:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.224 03:01:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:08.224 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.224 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.224 nvme0n1 00:17:08.224 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.224 03:01:47 -- 
host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.224 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.224 03:01:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:08.224 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.224 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.224 03:01:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.224 03:01:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.224 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.224 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.224 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.224 03:01:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.224 03:01:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.224 03:01:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:17:08.224 03:01:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.224 03:01:47 -- host/auth.sh@44 -- # digest=sha384 00:17:08.224 03:01:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.224 03:01:47 -- host/auth.sh@44 -- # keyid=0 00:17:08.224 03:01:47 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:08.224 03:01:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.224 03:01:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:08.224 03:01:47 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:08.224 03:01:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:17:08.224 03:01:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.224 03:01:47 -- host/auth.sh@68 -- # digest=sha384 00:17:08.224 03:01:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:08.224 03:01:47 -- host/auth.sh@68 -- # keyid=0 00:17:08.224 03:01:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.224 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.224 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.224 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.224 03:01:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.224 03:01:47 -- nvmf/common.sh@717 -- # local ip 00:17:08.224 03:01:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.224 03:01:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.224 03:01:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.224 03:01:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.224 03:01:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.482 03:01:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.482 03:01:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.482 03:01:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.482 03:01:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.482 03:01:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:08.482 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.482 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.482 nvme0n1 00:17:08.482 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.482 03:01:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.482 03:01:47 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.482 03:01:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:08.482 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.482 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.742 03:01:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.742 03:01:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:08.742 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.742 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.742 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.742 03:01:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:08.742 03:01:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:17:08.742 03:01:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:08.742 03:01:47 -- host/auth.sh@44 -- # digest=sha384 00:17:08.742 03:01:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:08.742 03:01:47 -- host/auth.sh@44 -- # keyid=1 00:17:08.742 03:01:47 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:08.742 03:01:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:08.742 03:01:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:08.742 03:01:47 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:08.742 03:01:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:17:08.742 03:01:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:08.742 03:01:47 -- host/auth.sh@68 -- # digest=sha384 00:17:08.742 03:01:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:08.742 03:01:47 -- host/auth.sh@68 -- # keyid=1 00:17:08.742 03:01:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:08.742 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.742 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.742 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.742 03:01:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:08.742 03:01:47 -- nvmf/common.sh@717 -- # local ip 00:17:08.742 03:01:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:08.742 03:01:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:08.742 03:01:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:08.742 03:01:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:08.742 03:01:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:08.742 03:01:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:08.742 03:01:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:08.742 03:01:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:08.742 03:01:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:08.742 03:01:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:08.742 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.742 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.742 nvme0n1 00:17:08.742 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:08.742 03:01:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.742 03:01:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:08.742 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 
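
Every secret in this run uses the printable DH-HMAC-CHAP representation DHHC-1:<id>:<base64>:. Per the NVMe-oF secret format (background knowledge, not something this log states), the two-digit id selects the transform applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 payload is the raw secret followed by a 4-byte CRC32 trailer. The lengths in this trace are consistent with that: key2 (id 01) decodes to 36 bytes (32 + 4), key3 (id 02) to 52 (48 + 4), key4 (id 03) to 68 (64 + 4). A quick length check with nothing but coreutils, using key4 exactly as it appears above:

key='DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=:'
b64=${key#DHHC-1:*:}                        # drop the "DHHC-1:<id>:" prefix
printf '%s' "${b64%:}" | base64 -d | wc -c  # prints 68: 64-byte secret + 4-byte CRC32 trailer
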
00:17:08.742 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:09.001 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.001 03:01:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.001 03:01:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.001 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.001 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:09.001 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.001 03:01:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:09.001 03:01:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:17:09.001 03:01:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:09.001 03:01:47 -- host/auth.sh@44 -- # digest=sha384 00:17:09.001 03:01:47 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.001 03:01:47 -- host/auth.sh@44 -- # keyid=2 00:17:09.001 03:01:47 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:09.001 03:01:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:09.001 03:01:47 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:09.001 03:01:47 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:09.001 03:01:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:17:09.001 03:01:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:09.001 03:01:47 -- host/auth.sh@68 -- # digest=sha384 00:17:09.001 03:01:47 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:09.001 03:01:47 -- host/auth.sh@68 -- # keyid=2 00:17:09.001 03:01:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.001 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.001 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:09.001 03:01:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.001 03:01:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:09.001 03:01:47 -- nvmf/common.sh@717 -- # local ip 00:17:09.001 03:01:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:09.001 03:01:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:09.001 03:01:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.001 03:01:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.001 03:01:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:09.001 03:01:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.001 03:01:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:09.001 03:01:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:09.001 03:01:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:09.001 03:01:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:09.001 03:01:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.001 03:01:47 -- common/autotest_common.sh@10 -- # set +x 00:17:09.259 nvme0n1 00:17:09.259 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.259 03:01:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.259 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.259 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.259 03:01:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:09.259 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.259 03:01:48 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.259 03:01:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.259 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.259 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.259 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.259 03:01:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:09.259 03:01:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:17:09.259 03:01:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:09.259 03:01:48 -- host/auth.sh@44 -- # digest=sha384 00:17:09.259 03:01:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.259 03:01:48 -- host/auth.sh@44 -- # keyid=3 00:17:09.259 03:01:48 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:09.259 03:01:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:09.259 03:01:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:09.259 03:01:48 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:09.259 03:01:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:17:09.259 03:01:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:09.259 03:01:48 -- host/auth.sh@68 -- # digest=sha384 00:17:09.259 03:01:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:09.259 03:01:48 -- host/auth.sh@68 -- # keyid=3 00:17:09.259 03:01:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.259 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.259 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.259 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.259 03:01:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:09.259 03:01:48 -- nvmf/common.sh@717 -- # local ip 00:17:09.259 03:01:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:09.259 03:01:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:09.259 03:01:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.259 03:01:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.259 03:01:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:09.259 03:01:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.259 03:01:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:09.259 03:01:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:09.259 03:01:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:09.259 03:01:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:09.259 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.259 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.517 nvme0n1 00:17:09.517 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.517 03:01:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.517 03:01:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:09.517 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.517 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.517 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.517 03:01:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.517 03:01:48 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:09.517 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.517 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.517 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.517 03:01:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:09.517 03:01:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:17:09.517 03:01:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:09.517 03:01:48 -- host/auth.sh@44 -- # digest=sha384 00:17:09.517 03:01:48 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:09.517 03:01:48 -- host/auth.sh@44 -- # keyid=4 00:17:09.517 03:01:48 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:09.517 03:01:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:09.517 03:01:48 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:09.517 03:01:48 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:09.517 03:01:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:17:09.517 03:01:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:09.517 03:01:48 -- host/auth.sh@68 -- # digest=sha384 00:17:09.517 03:01:48 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:09.517 03:01:48 -- host/auth.sh@68 -- # keyid=4 00:17:09.517 03:01:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:09.517 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.517 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.517 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.517 03:01:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:09.517 03:01:48 -- nvmf/common.sh@717 -- # local ip 00:17:09.517 03:01:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:09.517 03:01:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:09.517 03:01:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.517 03:01:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.517 03:01:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:09.517 03:01:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.517 03:01:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:09.517 03:01:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:09.517 03:01:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:09.517 03:01:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:09.517 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.517 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.775 nvme0n1 00:17:09.775 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.775 03:01:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:09.775 03:01:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.775 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.775 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.775 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.775 03:01:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.775 03:01:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:09.775 03:01:48 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.775 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.775 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.775 03:01:48 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.775 03:01:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:09.775 03:01:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:17:09.775 03:01:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:09.775 03:01:48 -- host/auth.sh@44 -- # digest=sha384 00:17:09.775 03:01:48 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:09.775 03:01:48 -- host/auth.sh@44 -- # keyid=0 00:17:09.775 03:01:48 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:09.775 03:01:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:09.775 03:01:48 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:09.775 03:01:48 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:09.775 03:01:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:17:09.775 03:01:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:09.775 03:01:48 -- host/auth.sh@68 -- # digest=sha384 00:17:09.775 03:01:48 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:09.775 03:01:48 -- host/auth.sh@68 -- # keyid=0 00:17:09.775 03:01:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:09.775 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.775 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:09.775 03:01:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.775 03:01:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:09.775 03:01:48 -- nvmf/common.sh@717 -- # local ip 00:17:09.775 03:01:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:09.775 03:01:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:09.775 03:01:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.775 03:01:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.775 03:01:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:09.775 03:01:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.775 03:01:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:09.775 03:01:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:09.775 03:01:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:09.775 03:01:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:09.775 03:01:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.775 03:01:48 -- common/autotest_common.sh@10 -- # set +x 00:17:10.342 nvme0n1 00:17:10.342 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.342 03:01:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.342 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.342 03:01:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:10.342 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:10.342 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.342 03:01:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.342 03:01:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.342 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.342 03:01:49 -- 
common/autotest_common.sh@10 -- # set +x 00:17:10.342 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.342 03:01:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:10.342 03:01:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:17:10.342 03:01:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:10.342 03:01:49 -- host/auth.sh@44 -- # digest=sha384 00:17:10.342 03:01:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.342 03:01:49 -- host/auth.sh@44 -- # keyid=1 00:17:10.342 03:01:49 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:10.342 03:01:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:10.342 03:01:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:10.342 03:01:49 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:10.342 03:01:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:17:10.342 03:01:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:10.342 03:01:49 -- host/auth.sh@68 -- # digest=sha384 00:17:10.342 03:01:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:10.342 03:01:49 -- host/auth.sh@68 -- # keyid=1 00:17:10.342 03:01:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.342 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.342 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:10.342 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.342 03:01:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:10.342 03:01:49 -- nvmf/common.sh@717 -- # local ip 00:17:10.342 03:01:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:10.342 03:01:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:10.342 03:01:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.342 03:01:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.342 03:01:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:10.342 03:01:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.342 03:01:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:10.342 03:01:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:10.342 03:01:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:10.342 03:01:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:10.342 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.342 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 nvme0n1 00:17:10.910 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.910 03:01:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.910 03:01:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:10.910 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.910 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.910 03:01:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.910 03:01:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:10.910 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.910 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
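
The nvmet_auth_set_key calls are the target-side half of each iteration. The three bare echo lines (host/auth.sh@47-49) show no redirection in xtrace, but given the function name they are almost certainly writes into the kernel nvmet configfs entry for the host NQN. A sketch under that assumption — the configfs attribute names below are the kernel nvmet host attributes, not something this log shows, so verify against your kernel before relying on them:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

echo 'hmac(sha384)' > "$host/dhchap_hash"     # digest used for the CHAP transform
echo ffdhe6144      > "$host/dhchap_dhgroup"  # FFDHE group for the key exchange
echo 'DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==:' \
                    > "$host/dhchap_key"      # per-host DH-HMAC-CHAP secret
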
00:17:10.910 03:01:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:10.910 03:01:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:17:10.910 03:01:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:10.910 03:01:49 -- host/auth.sh@44 -- # digest=sha384 00:17:10.910 03:01:49 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:10.910 03:01:49 -- host/auth.sh@44 -- # keyid=2 00:17:10.910 03:01:49 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:10.910 03:01:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:10.910 03:01:49 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:10.910 03:01:49 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:10.910 03:01:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:17:10.910 03:01:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:10.910 03:01:49 -- host/auth.sh@68 -- # digest=sha384 00:17:10.910 03:01:49 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:10.910 03:01:49 -- host/auth.sh@68 -- # keyid=2 00:17:10.910 03:01:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.910 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.910 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:10.910 03:01:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.910 03:01:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:10.910 03:01:49 -- nvmf/common.sh@717 -- # local ip 00:17:10.910 03:01:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:10.910 03:01:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:10.910 03:01:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:10.910 03:01:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:10.910 03:01:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:10.910 03:01:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:10.910 03:01:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:10.910 03:01:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:10.910 03:01:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:10.910 03:01:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:10.910 03:01:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.910 03:01:49 -- common/autotest_common.sh@10 -- # set +x 00:17:11.476 nvme0n1 00:17:11.476 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.476 03:01:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.476 03:01:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:11.476 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.476 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.476 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.476 03:01:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.476 03:01:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.476 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.476 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.476 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.476 03:01:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:11.476 03:01:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
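
Zooming out, this whole section is one sweep of a three-deep loop (host/auth.sh@107-109 in the xtrace), pairing every digest with every DH group and every provisioned key. A reconstruction of its shape — the dhgroups and key indices are exactly the ones this trace exercises, while the digests array is an assumption, since only sha384 is in flight in this excerpt:

digests=(sha384)   # only sha384 visible here; the test's real list likely also covers sha256/sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                           # keys[0..4] hold the DHHC-1 secrets
      nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"     # provision the target (host/auth.sh@110)
      connect_authenticate "$digest" "$dhgroup" "$keyid"     # attach, verify, detach (host/auth.sh@111)
    done
  done
done
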
00:17:11.476 03:01:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:11.476 03:01:50 -- host/auth.sh@44 -- # digest=sha384 00:17:11.476 03:01:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.476 03:01:50 -- host/auth.sh@44 -- # keyid=3 00:17:11.476 03:01:50 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:11.476 03:01:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:11.476 03:01:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:11.476 03:01:50 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:11.476 03:01:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:17:11.476 03:01:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:11.476 03:01:50 -- host/auth.sh@68 -- # digest=sha384 00:17:11.476 03:01:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:11.476 03:01:50 -- host/auth.sh@68 -- # keyid=3 00:17:11.476 03:01:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.476 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.476 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.476 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.476 03:01:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:11.476 03:01:50 -- nvmf/common.sh@717 -- # local ip 00:17:11.476 03:01:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:11.476 03:01:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:11.476 03:01:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.476 03:01:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.476 03:01:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:11.476 03:01:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.476 03:01:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:11.476 03:01:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:11.476 03:01:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:11.476 03:01:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:11.476 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.476 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.735 nvme0n1 00:17:11.735 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.735 03:01:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:11.735 03:01:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:11.735 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.735 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.735 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.735 03:01:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.735 03:01:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:11.735 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.735 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.994 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.994 03:01:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:11.994 03:01:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:17:11.994 03:01:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:11.994 03:01:50 -- host/auth.sh@44 -- 
# digest=sha384 00:17:11.994 03:01:50 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:11.994 03:01:50 -- host/auth.sh@44 -- # keyid=4 00:17:11.994 03:01:50 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:11.994 03:01:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:11.994 03:01:50 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:11.994 03:01:50 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:11.994 03:01:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:17:11.994 03:01:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:11.994 03:01:50 -- host/auth.sh@68 -- # digest=sha384 00:17:11.994 03:01:50 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:11.994 03:01:50 -- host/auth.sh@68 -- # keyid=4 00:17:11.994 03:01:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:11.994 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.994 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:11.994 03:01:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:11.994 03:01:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:11.994 03:01:50 -- nvmf/common.sh@717 -- # local ip 00:17:11.994 03:01:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:11.994 03:01:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:11.994 03:01:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:11.994 03:01:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:11.994 03:01:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:11.994 03:01:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:11.994 03:01:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:11.994 03:01:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:11.994 03:01:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:11.994 03:01:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:11.994 03:01:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:11.994 03:01:50 -- common/autotest_common.sh@10 -- # set +x 00:17:12.253 nvme0n1 00:17:12.253 03:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.253 03:01:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:12.253 03:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.253 03:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:12.253 03:01:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:12.253 03:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.511 03:01:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.511 03:01:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:12.511 03:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.511 03:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:12.511 03:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.511 03:01:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:12.511 03:01:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:12.511 03:01:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:17:12.511 03:01:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:12.511 03:01:51 -- host/auth.sh@44 -- # 
digest=sha384 00:17:12.511 03:01:51 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:12.511 03:01:51 -- host/auth.sh@44 -- # keyid=0 00:17:12.511 03:01:51 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:12.511 03:01:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:12.511 03:01:51 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:12.511 03:01:51 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:12.511 03:01:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:17:12.511 03:01:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:12.511 03:01:51 -- host/auth.sh@68 -- # digest=sha384 00:17:12.511 03:01:51 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:12.511 03:01:51 -- host/auth.sh@68 -- # keyid=0 00:17:12.511 03:01:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:12.511 03:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.511 03:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:12.511 03:01:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:12.511 03:01:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:12.511 03:01:51 -- nvmf/common.sh@717 -- # local ip 00:17:12.511 03:01:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:12.511 03:01:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:12.512 03:01:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:12.512 03:01:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:12.512 03:01:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:12.512 03:01:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:12.512 03:01:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:12.512 03:01:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:12.512 03:01:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:12.512 03:01:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:12.512 03:01:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:12.512 03:01:51 -- common/autotest_common.sh@10 -- # set +x 00:17:13.080 nvme0n1 00:17:13.080 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.080 03:01:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.080 03:01:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:13.080 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.080 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.080 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.080 03:01:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.080 03:01:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.080 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.080 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.080 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.080 03:01:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:13.080 03:01:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:17:13.080 03:01:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:13.080 03:01:52 -- host/auth.sh@44 -- # digest=sha384 00:17:13.080 03:01:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.080 03:01:52 -- host/auth.sh@44 -- # keyid=1 00:17:13.080 03:01:52 -- 
host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:13.080 03:01:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:13.080 03:01:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:13.080 03:01:52 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:13.080 03:01:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:17:13.080 03:01:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:13.080 03:01:52 -- host/auth.sh@68 -- # digest=sha384 00:17:13.080 03:01:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:13.080 03:01:52 -- host/auth.sh@68 -- # keyid=1 00:17:13.080 03:01:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.081 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.081 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.081 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.081 03:01:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:13.081 03:01:52 -- nvmf/common.sh@717 -- # local ip 00:17:13.081 03:01:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:13.081 03:01:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:13.081 03:01:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.081 03:01:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.081 03:01:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:13.081 03:01:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.081 03:01:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:13.081 03:01:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:13.081 03:01:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:13.081 03:01:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:13.081 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.081 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.649 nvme0n1 00:17:13.649 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.649 03:01:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:13.649 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.649 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.649 03:01:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:13.649 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.649 03:01:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.649 03:01:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:13.649 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.649 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.649 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.650 03:01:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:13.650 03:01:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:17:13.650 03:01:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:13.650 03:01:52 -- host/auth.sh@44 -- # digest=sha384 00:17:13.650 03:01:52 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:13.650 03:01:52 -- host/auth.sh@44 -- # keyid=2 00:17:13.650 03:01:52 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:13.650 03:01:52 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:13.650 03:01:52 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:13.650 03:01:52 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:13.650 03:01:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:17:13.650 03:01:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:13.650 03:01:52 -- host/auth.sh@68 -- # digest=sha384 00:17:13.650 03:01:52 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:13.650 03:01:52 -- host/auth.sh@68 -- # keyid=2 00:17:13.650 03:01:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:13.650 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.650 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:13.650 03:01:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:13.650 03:01:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:13.650 03:01:52 -- nvmf/common.sh@717 -- # local ip 00:17:13.650 03:01:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:13.650 03:01:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:13.650 03:01:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:13.650 03:01:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:13.650 03:01:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:13.650 03:01:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:13.650 03:01:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:13.650 03:01:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:13.650 03:01:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:13.650 03:01:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:13.650 03:01:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:13.650 03:01:52 -- common/autotest_common.sh@10 -- # set +x 00:17:14.598 nvme0n1 00:17:14.598 03:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.598 03:01:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:14.598 03:01:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:14.598 03:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.598 03:01:53 -- common/autotest_common.sh@10 -- # set +x 00:17:14.598 03:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.598 03:01:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.598 03:01:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:14.598 03:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.598 03:01:53 -- common/autotest_common.sh@10 -- # set +x 00:17:14.598 03:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.598 03:01:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:14.598 03:01:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:17:14.598 03:01:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:14.598 03:01:53 -- host/auth.sh@44 -- # digest=sha384 00:17:14.598 03:01:53 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:14.598 03:01:53 -- host/auth.sh@44 -- # keyid=3 00:17:14.598 03:01:53 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:14.598 03:01:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:14.598 03:01:53 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:14.598 03:01:53 -- host/auth.sh@49 
-- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:14.598 03:01:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:17:14.598 03:01:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:14.598 03:01:53 -- host/auth.sh@68 -- # digest=sha384 00:17:14.598 03:01:53 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:14.598 03:01:53 -- host/auth.sh@68 -- # keyid=3 00:17:14.598 03:01:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:14.598 03:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.598 03:01:53 -- common/autotest_common.sh@10 -- # set +x 00:17:14.598 03:01:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:14.598 03:01:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:14.598 03:01:53 -- nvmf/common.sh@717 -- # local ip 00:17:14.598 03:01:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:14.598 03:01:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:14.598 03:01:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:14.598 03:01:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:14.598 03:01:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:14.598 03:01:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:14.598 03:01:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:14.598 03:01:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:14.598 03:01:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:14.598 03:01:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:14.598 03:01:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:14.598 03:01:53 -- common/autotest_common.sh@10 -- # set +x 00:17:15.192 nvme0n1 00:17:15.192 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.192 03:01:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.192 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.192 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.192 03:01:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.192 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.192 03:01:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.192 03:01:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.193 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.193 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.193 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.193 03:01:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.193 03:01:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:17:15.193 03:01:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.193 03:01:54 -- host/auth.sh@44 -- # digest=sha384 00:17:15.193 03:01:54 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:15.193 03:01:54 -- host/auth.sh@44 -- # keyid=4 00:17:15.193 03:01:54 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:15.193 03:01:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:17:15.193 03:01:54 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:15.193 03:01:54 -- host/auth.sh@49 -- # echo 
DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:15.193 03:01:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:17:15.193 03:01:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.193 03:01:54 -- host/auth.sh@68 -- # digest=sha384 00:17:15.193 03:01:54 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:15.193 03:01:54 -- host/auth.sh@68 -- # keyid=4 00:17:15.193 03:01:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.193 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.193 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.193 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.193 03:01:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:15.193 03:01:54 -- nvmf/common.sh@717 -- # local ip 00:17:15.193 03:01:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.193 03:01:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.193 03:01:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.193 03:01:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.193 03:01:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.193 03:01:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.193 03:01:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.193 03:01:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.193 03:01:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.193 03:01:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:15.193 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.193 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 nvme0n1 00:17:15.763 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.763 03:01:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.763 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.763 03:01:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.763 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.763 03:01:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.763 03:01:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:15.763 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.763 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.763 03:01:54 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:17:15.763 03:01:54 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.763 03:01:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:15.763 03:01:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:17:15.763 03:01:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:15.763 03:01:54 -- host/auth.sh@44 -- # digest=sha512 00:17:15.763 03:01:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:15.763 03:01:54 -- host/auth.sh@44 -- # keyid=0 00:17:15.763 03:01:54 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:15.763 03:01:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:15.763 03:01:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:15.763 
03:01:54 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:15.763 03:01:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:17:15.763 03:01:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:15.763 03:01:54 -- host/auth.sh@68 -- # digest=sha512 00:17:15.763 03:01:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:15.763 03:01:54 -- host/auth.sh@68 -- # keyid=0 00:17:15.763 03:01:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:15.763 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.763 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.763 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.763 03:01:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:15.763 03:01:54 -- nvmf/common.sh@717 -- # local ip 00:17:15.763 03:01:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:15.763 03:01:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:15.763 03:01:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:15.763 03:01:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:15.763 03:01:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:15.763 03:01:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:15.763 03:01:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:15.763 03:01:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:15.763 03:01:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:15.763 03:01:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:15.763 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.764 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:15.764 nvme0n1 00:17:15.764 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:15.764 03:01:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:15.764 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:15.764 03:01:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:15.764 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:16.029 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.029 03:01:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.029 03:01:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.029 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.029 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:16.029 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.029 03:01:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.029 03:01:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:17:16.029 03:01:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.029 03:01:54 -- host/auth.sh@44 -- # digest=sha512 00:17:16.030 03:01:54 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.030 03:01:54 -- host/auth.sh@44 -- # keyid=1 00:17:16.030 03:01:54 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:16.030 03:01:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.030 03:01:54 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:16.030 03:01:54 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:16.030 03:01:54 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:17:16.030 03:01:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.030 03:01:54 -- host/auth.sh@68 -- # digest=sha512 00:17:16.030 03:01:54 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:16.030 03:01:54 -- host/auth.sh@68 -- # keyid=1 00:17:16.030 03:01:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.030 03:01:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.030 03:01:54 -- common/autotest_common.sh@10 -- # set +x 00:17:16.030 03:01:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.030 03:01:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.030 03:01:54 -- nvmf/common.sh@717 -- # local ip 00:17:16.030 03:01:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.030 03:01:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.030 03:01:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.030 03:01:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.030 03:01:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.030 03:01:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.030 03:01:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.030 03:01:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.030 03:01:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.030 03:01:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:16.030 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.030 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.030 nvme0n1 00:17:16.030 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.030 03:01:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.030 03:01:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.030 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.030 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.030 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.030 03:01:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.030 03:01:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.030 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.030 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.030 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.030 03:01:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.030 03:01:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:17:16.030 03:01:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.031 03:01:55 -- host/auth.sh@44 -- # digest=sha512 00:17:16.031 03:01:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.031 03:01:55 -- host/auth.sh@44 -- # keyid=2 00:17:16.031 03:01:55 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:16.031 03:01:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.031 03:01:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:16.031 03:01:55 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:16.031 03:01:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:17:16.031 03:01:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.031 03:01:55 -- 
host/auth.sh@68 -- # digest=sha512 00:17:16.031 03:01:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:16.031 03:01:55 -- host/auth.sh@68 -- # keyid=2 00:17:16.031 03:01:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.031 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.031 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.031 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.031 03:01:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.031 03:01:55 -- nvmf/common.sh@717 -- # local ip 00:17:16.031 03:01:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.031 03:01:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.031 03:01:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.031 03:01:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.031 03:01:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.031 03:01:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.031 03:01:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.031 03:01:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.031 03:01:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.031 03:01:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:16.031 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.031 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.292 nvme0n1 00:17:16.292 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.292 03:01:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.292 03:01:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.292 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.292 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.292 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.292 03:01:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.292 03:01:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.292 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.292 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.292 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.292 03:01:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.292 03:01:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:17:16.292 03:01:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.292 03:01:55 -- host/auth.sh@44 -- # digest=sha512 00:17:16.292 03:01:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.292 03:01:55 -- host/auth.sh@44 -- # keyid=3 00:17:16.292 03:01:55 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:16.292 03:01:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.292 03:01:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:16.292 03:01:55 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:16.292 03:01:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:17:16.292 03:01:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.292 03:01:55 -- host/auth.sh@68 -- # digest=sha512 00:17:16.292 03:01:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:16.292 03:01:55 
-- host/auth.sh@68 -- # keyid=3 00:17:16.292 03:01:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.292 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.292 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.292 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.292 03:01:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.292 03:01:55 -- nvmf/common.sh@717 -- # local ip 00:17:16.292 03:01:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.292 03:01:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.292 03:01:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.292 03:01:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.292 03:01:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.292 03:01:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.292 03:01:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.292 03:01:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.292 03:01:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.292 03:01:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:16.292 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.292 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.551 nvme0n1 00:17:16.551 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.551 03:01:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.551 03:01:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.551 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.551 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.551 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.551 03:01:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.551 03:01:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.551 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.551 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.551 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.551 03:01:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.551 03:01:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:17:16.551 03:01:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.551 03:01:55 -- host/auth.sh@44 -- # digest=sha512 00:17:16.551 03:01:55 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:16.551 03:01:55 -- host/auth.sh@44 -- # keyid=4 00:17:16.551 03:01:55 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:16.551 03:01:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.551 03:01:55 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:16.551 03:01:55 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:16.551 03:01:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:17:16.551 03:01:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.551 03:01:55 -- host/auth.sh@68 -- # digest=sha512 00:17:16.551 03:01:55 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:17:16.551 03:01:55 -- host/auth.sh@68 -- # keyid=4 00:17:16.551 03:01:55 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:16.551 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.551 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.551 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.551 03:01:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.551 03:01:55 -- nvmf/common.sh@717 -- # local ip 00:17:16.551 03:01:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.551 03:01:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.551 03:01:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.551 03:01:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.551 03:01:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.551 03:01:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.551 03:01:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.551 03:01:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.551 03:01:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.551 03:01:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:16.551 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.551 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.551 nvme0n1 00:17:16.551 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.551 03:01:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.551 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.551 03:01:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.551 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.551 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.810 03:01:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.810 03:01:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.810 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.810 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.810 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.810 03:01:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:16.810 03:01:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.810 03:01:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:17:16.810 03:01:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.810 03:01:55 -- host/auth.sh@44 -- # digest=sha512 00:17:16.810 03:01:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.810 03:01:55 -- host/auth.sh@44 -- # keyid=0 00:17:16.810 03:01:55 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:16.810 03:01:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.810 03:01:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:16.810 03:01:55 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:16.810 03:01:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:17:16.810 03:01:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.810 03:01:55 -- host/auth.sh@68 -- # digest=sha512 00:17:16.810 03:01:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:16.810 03:01:55 -- host/auth.sh@68 -- # keyid=0 00:17:16.810 03:01:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
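
Stepping back from the raw xtrace: every block above is one iteration of the same sweep in host/auth.sh. For each (digest, dhgroup, keyid) combination the script programs the kernel nvmet target with a DH-HMAC-CHAP secret, restricts the SPDK initiator to the matching digest and DH group, attaches over TCP with that key slot, verifies the controller actually appeared, and detaches again. The sketch below condenses that loop; it reuses only names visible in the trace (rpc_cmd, the digests/dhgroups/keys arrays, the nvme0 and 10.0.0.1/4420 endpoints). The configfs destination of the echoed 'hmac(shaNNN)', dhgroup, and key values is not shown in this excerpt, so nvmet_auth_set_key is kept as the opaque helper the script itself calls.

  # One pass of the digest/dhgroup/key sweep traced above (a sketch,
  # not the verbatim host/auth.sh source). rpc_cmd wraps scripts/rpc.py
  # against the running SPDK target application.
  for digest in "${digests[@]}"; do          # e.g. sha384, sha512
    for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 .. ffdhe8192
      for keyid in "${!keys[@]}"; do         # key slots 0..4
        # Target side: install the secret for this host (writes the
        # 'hmac(<digest>)' string, the dhgroup and the DHHC-1 key).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Host side: permit only the matching digest/dhgroup, then
        # authenticate the connection with the corresponding key slot.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

        # The attach only counts if the controller really came up.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
      done
    done
  done

The DHHC-1:NN:...: strings cycling through the trace are NVMe-oF secret representations: NN records the transform applied to the secret (00 = none, 01/02/03 = SHA-256/384/512), and the base64 payload carries the secret followed by a 4-byte CRC-32 tail. That is why the keyid-0 secret decodes to 36 bytes (a 32-byte secret plus the CRC), which can be checked directly:

  # Decode one of the logged keys and count the payload bytes.
  key='DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9:'
  b64=${key#DHHC-1:*:}                         # strip the 'DHHC-1:NN:' prefix
  printf '%s' "${b64%:}" | base64 -d | wc -c   # -> 36 (32-byte secret + CRC)
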
00:17:16.810 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.810 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.810 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.810 03:01:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.810 03:01:55 -- nvmf/common.sh@717 -- # local ip 00:17:16.810 03:01:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.810 03:01:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.810 03:01:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.810 03:01:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.810 03:01:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.810 03:01:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.810 03:01:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.810 03:01:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.810 03:01:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.810 03:01:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:16.810 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.810 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.810 nvme0n1 00:17:16.810 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.811 03:01:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:16.811 03:01:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:16.811 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.811 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.811 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.811 03:01:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.811 03:01:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:16.811 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.811 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:16.811 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.811 03:01:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:16.811 03:01:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:17:16.811 03:01:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:16.811 03:01:55 -- host/auth.sh@44 -- # digest=sha512 00:17:16.811 03:01:55 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:16.811 03:01:55 -- host/auth.sh@44 -- # keyid=1 00:17:16.811 03:01:55 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:16.811 03:01:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:16.811 03:01:55 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:16.811 03:01:55 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:16.811 03:01:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:17:16.811 03:01:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:16.811 03:01:55 -- host/auth.sh@68 -- # digest=sha512 00:17:16.811 03:01:55 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:16.811 03:01:55 -- host/auth.sh@68 -- # keyid=1 00:17:16.811 03:01:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.811 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.811 03:01:55 -- 
common/autotest_common.sh@10 -- # set +x 00:17:16.811 03:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:16.811 03:01:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:16.811 03:01:55 -- nvmf/common.sh@717 -- # local ip 00:17:16.811 03:01:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:16.811 03:01:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:16.811 03:01:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:16.811 03:01:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:16.811 03:01:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:16.811 03:01:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:16.811 03:01:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:16.811 03:01:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:16.811 03:01:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:16.811 03:01:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:16.811 03:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:16.811 03:01:55 -- common/autotest_common.sh@10 -- # set +x 00:17:17.070 nvme0n1 00:17:17.070 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.070 03:01:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.070 03:01:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.070 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.070 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.070 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.070 03:01:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.070 03:01:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.070 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.070 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.070 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.070 03:01:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.070 03:01:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:17:17.070 03:01:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.070 03:01:56 -- host/auth.sh@44 -- # digest=sha512 00:17:17.070 03:01:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.070 03:01:56 -- host/auth.sh@44 -- # keyid=2 00:17:17.070 03:01:56 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:17.070 03:01:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.070 03:01:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:17.070 03:01:56 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:17.070 03:01:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:17:17.070 03:01:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.070 03:01:56 -- host/auth.sh@68 -- # digest=sha512 00:17:17.070 03:01:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:17.070 03:01:56 -- host/auth.sh@68 -- # keyid=2 00:17:17.070 03:01:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.070 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.070 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.070 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.070 03:01:56 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:17:17.070 03:01:56 -- nvmf/common.sh@717 -- # local ip 00:17:17.070 03:01:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.070 03:01:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.070 03:01:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.070 03:01:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.070 03:01:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.070 03:01:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.070 03:01:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.070 03:01:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.070 03:01:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.070 03:01:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:17.070 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.070 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.328 nvme0n1 00:17:17.328 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.328 03:01:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.328 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.328 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.328 03:01:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.328 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.328 03:01:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.328 03:01:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.328 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.328 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.328 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.328 03:01:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.328 03:01:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:17.328 03:01:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.328 03:01:56 -- host/auth.sh@44 -- # digest=sha512 00:17:17.328 03:01:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.328 03:01:56 -- host/auth.sh@44 -- # keyid=3 00:17:17.328 03:01:56 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:17.328 03:01:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.328 03:01:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:17.328 03:01:56 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:17.328 03:01:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:17:17.328 03:01:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.328 03:01:56 -- host/auth.sh@68 -- # digest=sha512 00:17:17.328 03:01:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:17.328 03:01:56 -- host/auth.sh@68 -- # keyid=3 00:17:17.328 03:01:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.328 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.328 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.328 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.328 03:01:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.328 03:01:56 -- nvmf/common.sh@717 -- # local ip 00:17:17.328 03:01:56 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:17:17.328 03:01:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.328 03:01:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.328 03:01:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.328 03:01:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.328 03:01:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.328 03:01:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.328 03:01:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.328 03:01:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.328 03:01:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:17.328 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.328 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 nvme0n1 00:17:17.585 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.585 03:01:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.585 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.585 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 03:01:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.585 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.585 03:01:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.585 03:01:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.585 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.585 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.585 03:01:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.585 03:01:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:17.585 03:01:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.585 03:01:56 -- host/auth.sh@44 -- # digest=sha512 00:17:17.585 03:01:56 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:17.585 03:01:56 -- host/auth.sh@44 -- # keyid=4 00:17:17.585 03:01:56 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:17.585 03:01:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.585 03:01:56 -- host/auth.sh@48 -- # echo ffdhe3072 00:17:17.585 03:01:56 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:17.585 03:01:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:17:17.585 03:01:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.585 03:01:56 -- host/auth.sh@68 -- # digest=sha512 00:17:17.585 03:01:56 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:17:17.585 03:01:56 -- host/auth.sh@68 -- # keyid=4 00:17:17.585 03:01:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.585 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.585 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.585 03:01:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.585 03:01:56 -- nvmf/common.sh@717 -- # local ip 00:17:17.585 03:01:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.585 03:01:56 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:17:17.585 03:01:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.585 03:01:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.585 03:01:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.585 03:01:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.585 03:01:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.585 03:01:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.585 03:01:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.585 03:01:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:17.585 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.585 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 nvme0n1 00:17:17.585 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.585 03:01:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.585 03:01:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.585 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.585 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.843 03:01:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.843 03:01:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:17.843 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.843 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.843 03:01:56 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:17.843 03:01:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:17.843 03:01:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:17.843 03:01:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:17.843 03:01:56 -- host/auth.sh@44 -- # digest=sha512 00:17:17.843 03:01:56 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:17.843 03:01:56 -- host/auth.sh@44 -- # keyid=0 00:17:17.843 03:01:56 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:17.843 03:01:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:17.843 03:01:56 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:17.843 03:01:56 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:17.843 03:01:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:17:17.843 03:01:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:17.843 03:01:56 -- host/auth.sh@68 -- # digest=sha512 00:17:17.843 03:01:56 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:17.843 03:01:56 -- host/auth.sh@68 -- # keyid=0 00:17:17.843 03:01:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:17.843 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.843 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.843 03:01:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:17.843 03:01:56 -- nvmf/common.sh@717 -- # local ip 00:17:17.843 03:01:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:17.843 03:01:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:17.843 03:01:56 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:17.843 03:01:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:17.843 03:01:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:17.843 03:01:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:17.843 03:01:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:17.843 03:01:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:17.843 03:01:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:17.843 03:01:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:17.843 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.843 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 nvme0n1 00:17:17.843 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:17.843 03:01:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:17.843 03:01:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:17.843 03:01:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:17.843 03:01:56 -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 03:01:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.101 03:01:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.101 03:01:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.101 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.101 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.101 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.101 03:01:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:18.101 03:01:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:18.102 03:01:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:18.102 03:01:57 -- host/auth.sh@44 -- # digest=sha512 00:17:18.102 03:01:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.102 03:01:57 -- host/auth.sh@44 -- # keyid=1 00:17:18.102 03:01:57 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:18.102 03:01:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:18.102 03:01:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:18.102 03:01:57 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:18.102 03:01:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:17:18.102 03:01:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:18.102 03:01:57 -- host/auth.sh@68 -- # digest=sha512 00:17:18.102 03:01:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:18.102 03:01:57 -- host/auth.sh@68 -- # keyid=1 00:17:18.102 03:01:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.102 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.102 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.102 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.102 03:01:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:18.102 03:01:57 -- nvmf/common.sh@717 -- # local ip 00:17:18.102 03:01:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:18.102 03:01:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:18.102 03:01:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.102 03:01:57 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.102 03:01:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:18.102 03:01:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.102 03:01:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:18.102 03:01:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:18.102 03:01:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:18.102 03:01:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:18.102 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.102 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.102 nvme0n1 00:17:18.102 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.102 03:01:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.102 03:01:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:18.102 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.102 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.102 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.361 03:01:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.361 03:01:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.361 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.361 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.361 03:01:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:18.361 03:01:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:18.361 03:01:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:18.361 03:01:57 -- host/auth.sh@44 -- # digest=sha512 00:17:18.361 03:01:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.361 03:01:57 -- host/auth.sh@44 -- # keyid=2 00:17:18.361 03:01:57 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:18.361 03:01:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:18.361 03:01:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:18.361 03:01:57 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:18.361 03:01:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:17:18.361 03:01:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:18.361 03:01:57 -- host/auth.sh@68 -- # digest=sha512 00:17:18.361 03:01:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:18.361 03:01:57 -- host/auth.sh@68 -- # keyid=2 00:17:18.361 03:01:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.361 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.361 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.361 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.361 03:01:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:18.361 03:01:57 -- nvmf/common.sh@717 -- # local ip 00:17:18.361 03:01:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:18.361 03:01:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:18.361 03:01:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.361 03:01:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.361 03:01:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:18.361 03:01:57 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:17:18.361 03:01:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:18.361 03:01:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:18.361 03:01:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:18.361 03:01:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:18.361 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.361 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.619 nvme0n1 00:17:18.619 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.619 03:01:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.619 03:01:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:18.619 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.619 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.619 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.619 03:01:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.619 03:01:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.619 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.619 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.619 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.619 03:01:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:18.619 03:01:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:18.619 03:01:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:18.619 03:01:57 -- host/auth.sh@44 -- # digest=sha512 00:17:18.619 03:01:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.620 03:01:57 -- host/auth.sh@44 -- # keyid=3 00:17:18.620 03:01:57 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:18.620 03:01:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:18.620 03:01:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:18.620 03:01:57 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:18.620 03:01:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:17:18.620 03:01:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:18.620 03:01:57 -- host/auth.sh@68 -- # digest=sha512 00:17:18.620 03:01:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:18.620 03:01:57 -- host/auth.sh@68 -- # keyid=3 00:17:18.620 03:01:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.620 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.620 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.620 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.620 03:01:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:18.620 03:01:57 -- nvmf/common.sh@717 -- # local ip 00:17:18.620 03:01:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:18.620 03:01:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:18.620 03:01:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.620 03:01:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.620 03:01:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:18.620 03:01:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.620 03:01:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:18.620 03:01:57 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:18.620 03:01:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:18.620 03:01:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:18.620 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.620 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.878 nvme0n1 00:17:18.878 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.878 03:01:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:18.878 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.878 03:01:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:18.878 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.878 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.878 03:01:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.878 03:01:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:18.878 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.878 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.878 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.878 03:01:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:18.878 03:01:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:18.878 03:01:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:18.878 03:01:57 -- host/auth.sh@44 -- # digest=sha512 00:17:18.878 03:01:57 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:18.878 03:01:57 -- host/auth.sh@44 -- # keyid=4 00:17:18.878 03:01:57 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:18.878 03:01:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:18.878 03:01:57 -- host/auth.sh@48 -- # echo ffdhe4096 00:17:18.878 03:01:57 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:18.878 03:01:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:17:18.878 03:01:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:18.878 03:01:57 -- host/auth.sh@68 -- # digest=sha512 00:17:18.878 03:01:57 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:17:18.878 03:01:57 -- host/auth.sh@68 -- # keyid=4 00:17:18.878 03:01:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:18.878 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.878 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:18.878 03:01:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:18.878 03:01:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:18.878 03:01:57 -- nvmf/common.sh@717 -- # local ip 00:17:18.878 03:01:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:18.878 03:01:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:18.878 03:01:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:18.878 03:01:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:18.878 03:01:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:18.878 03:01:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:18.878 03:01:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:18.878 03:01:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:18.878 03:01:57 -- 
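The five DHHC-1 secrets cycled through this section differ in their second field (00, 00, 01, 02, 03): in nvme-cli's dhchap key syntax that field names the hash used to transform the stored secret (00 = unhashed, 01/02/03 = SHA-256/384/512), and the base64 payload carries the secret plus a trailing CRC-32. One way to mint such a key, assuming a reasonably recent nvme-cli (flag spellings vary across versions):

    nvme gen-dhchap-key --key-length=48 --hmac=3 --nqn=nqn.2024-02.io.spdk:host0
    # emits something like DHHC-1:03:<base64 secret+crc>: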
nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:18.878 03:01:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:18.878 03:01:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:18.878 03:01:57 -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 nvme0n1 00:17:19.137 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.137 03:01:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.137 03:01:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:19.137 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.137 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.137 03:01:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.137 03:01:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.137 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.137 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.137 03:01:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:19.137 03:01:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:19.137 03:01:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:19.137 03:01:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:19.137 03:01:58 -- host/auth.sh@44 -- # digest=sha512 00:17:19.137 03:01:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.137 03:01:58 -- host/auth.sh@44 -- # keyid=0 00:17:19.137 03:01:58 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:19.137 03:01:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:19.137 03:01:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:19.137 03:01:58 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:19.137 03:01:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:17:19.137 03:01:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:19.137 03:01:58 -- host/auth.sh@68 -- # digest=sha512 00:17:19.137 03:01:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:19.137 03:01:58 -- host/auth.sh@68 -- # keyid=0 00:17:19.137 03:01:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.137 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.137 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.137 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.137 03:01:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:19.137 03:01:58 -- nvmf/common.sh@717 -- # local ip 00:17:19.137 03:01:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:19.137 03:01:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:19.137 03:01:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.137 03:01:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.137 03:01:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:19.137 03:01:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.137 03:01:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:19.137 03:01:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:19.137 03:01:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:19.137 03:01:58 -- host/auth.sh@70 -- # 
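At this point the ffdhe4096 pass has covered key IDs 0 through 4 and the @108-@110 markers show the outer loop re-entering with ffdhe6144. Condensed, the loop this trace keeps expanding looks like the sketch below; every RPC in it appears verbatim in the trace, connect_authenticate is the helper whose body the @66-@74 lines expand, and sha512 is fixed here because the enclosing digest loop is already on its last value:

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
            # pin the initiator to the one combination under test
            rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
                -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
            # the attach only succeeds if DH-HMAC-CHAP completed
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done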
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:19.137 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.137 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.394 nvme0n1 00:17:19.394 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.394 03:01:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:19.394 03:01:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.394 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.394 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.394 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.653 03:01:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.653 03:01:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.653 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.653 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.653 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.653 03:01:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:19.653 03:01:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:19.653 03:01:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:19.653 03:01:58 -- host/auth.sh@44 -- # digest=sha512 00:17:19.653 03:01:58 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.653 03:01:58 -- host/auth.sh@44 -- # keyid=1 00:17:19.653 03:01:58 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:19.653 03:01:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:19.653 03:01:58 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:19.653 03:01:58 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:19.653 03:01:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:17:19.653 03:01:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:19.653 03:01:58 -- host/auth.sh@68 -- # digest=sha512 00:17:19.653 03:01:58 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:19.653 03:01:58 -- host/auth.sh@68 -- # keyid=1 00:17:19.653 03:01:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.653 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.653 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.653 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.653 03:01:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:19.653 03:01:58 -- nvmf/common.sh@717 -- # local ip 00:17:19.653 03:01:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:19.653 03:01:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:19.653 03:01:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:19.653 03:01:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:19.653 03:01:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:19.653 03:01:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:19.653 03:01:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:19.653 03:01:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:19.653 03:01:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:19.653 03:01:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:19.653 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.653 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.910 nvme0n1 00:17:19.910 03:01:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.910 03:01:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:19.910 03:01:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.910 03:01:58 -- common/autotest_common.sh@10 -- # set +x 00:17:19.910 03:01:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:19.910 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.910 03:01:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.910 03:01:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:19.910 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.910 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:19.910 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:19.910 03:01:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:19.910 03:01:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:19.910 03:01:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:19.910 03:01:59 -- host/auth.sh@44 -- # digest=sha512 00:17:19.910 03:01:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:19.910 03:01:59 -- host/auth.sh@44 -- # keyid=2 00:17:19.910 03:01:59 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:19.910 03:01:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:19.910 03:01:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:19.910 03:01:59 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:19.910 03:01:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:17:19.910 03:01:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:19.910 03:01:59 -- host/auth.sh@68 -- # digest=sha512 00:17:19.910 03:01:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:19.910 03:01:59 -- host/auth.sh@68 -- # keyid=2 00:17:19.910 03:01:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:19.910 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:19.910 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.167 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.167 03:01:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:20.167 03:01:59 -- nvmf/common.sh@717 -- # local ip 00:17:20.167 03:01:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:20.167 03:01:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:20.167 03:01:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.167 03:01:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.167 03:01:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:20.167 03:01:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.167 03:01:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:20.167 03:01:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:20.167 03:01:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:20.167 03:01:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:20.167 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.167 03:01:59 -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.424 nvme0n1 00:17:20.424 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.424 03:01:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.424 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.424 03:01:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:20.424 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.424 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.424 03:01:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.424 03:01:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.424 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.424 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.424 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.424 03:01:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:20.424 03:01:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:20.424 03:01:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:20.424 03:01:59 -- host/auth.sh@44 -- # digest=sha512 00:17:20.424 03:01:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.424 03:01:59 -- host/auth.sh@44 -- # keyid=3 00:17:20.424 03:01:59 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:20.424 03:01:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:20.424 03:01:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:20.424 03:01:59 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:20.424 03:01:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:17:20.424 03:01:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:20.424 03:01:59 -- host/auth.sh@68 -- # digest=sha512 00:17:20.424 03:01:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:20.424 03:01:59 -- host/auth.sh@68 -- # keyid=3 00:17:20.424 03:01:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.424 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.424 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.424 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.424 03:01:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:20.424 03:01:59 -- nvmf/common.sh@717 -- # local ip 00:17:20.424 03:01:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:20.424 03:01:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:20.424 03:01:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.424 03:01:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.424 03:01:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:20.424 03:01:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.424 03:01:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:20.424 03:01:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:20.424 03:01:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:20.424 03:01:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:20.424 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.424 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.990 nvme0n1 00:17:20.990 03:01:59 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:17:20.990 03:01:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:20.990 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.990 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.990 03:01:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:20.990 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.990 03:01:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.990 03:01:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:20.990 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.990 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.990 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.990 03:01:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:20.990 03:01:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:20.990 03:01:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:20.990 03:01:59 -- host/auth.sh@44 -- # digest=sha512 00:17:20.990 03:01:59 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:20.990 03:01:59 -- host/auth.sh@44 -- # keyid=4 00:17:20.990 03:01:59 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:20.990 03:01:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:20.990 03:01:59 -- host/auth.sh@48 -- # echo ffdhe6144 00:17:20.990 03:01:59 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:20.990 03:01:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:17:20.990 03:01:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:20.990 03:01:59 -- host/auth.sh@68 -- # digest=sha512 00:17:20.990 03:01:59 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:17:20.990 03:01:59 -- host/auth.sh@68 -- # keyid=4 00:17:20.990 03:01:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:20.990 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.990 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.990 03:01:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:20.990 03:01:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:20.990 03:01:59 -- nvmf/common.sh@717 -- # local ip 00:17:20.990 03:01:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:20.990 03:01:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:20.990 03:01:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:20.990 03:01:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:20.990 03:01:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:20.990 03:01:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:20.990 03:01:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:20.990 03:01:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:20.990 03:01:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:20.990 03:01:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:20.990 03:01:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:20.990 03:01:59 -- common/autotest_common.sh@10 -- # set +x 00:17:21.248 nvme0n1 00:17:21.248 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.248 03:02:00 -- host/auth.sh@73 -- # rpc_cmd 
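The get_main_ns_ip block that precedes every attach (nvmf/common.sh@717-@731) is fully expanded in the trace, so it can be reconstructed almost line for line. Only the name of the transport variable is an assumption (the trace shows it already expanded to tcp); everything else, including the indirect expansion through the env-var name, is visible above:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # @720
        ip_candidates["tcp"]=NVMF_INITIATOR_IP       # @721
        # the array maps transport -> env-var *name*; bail if either is unset
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}         # @724
        [[ -z ${!ip} ]] && return 1                  # @726: dereference the name
        echo "${!ip}"                                # @731: 10.0.0.1 for tcp here
    }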
bdev_nvme_get_controllers 00:17:21.248 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.248 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.248 03:02:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:21.248 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.248 03:02:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.248 03:02:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.248 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.248 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.248 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.248 03:02:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.248 03:02:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:21.248 03:02:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:21.248 03:02:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:21.248 03:02:00 -- host/auth.sh@44 -- # digest=sha512 00:17:21.248 03:02:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:21.248 03:02:00 -- host/auth.sh@44 -- # keyid=0 00:17:21.248 03:02:00 -- host/auth.sh@45 -- # key=DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:21.248 03:02:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:21.248 03:02:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:21.248 03:02:00 -- host/auth.sh@49 -- # echo DHHC-1:00:MjU3NjEyNzkyNDgyOTViOWM0YjFmNzQ1ODkwOTRmZTYEwvb9: 00:17:21.248 03:02:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:17:21.248 03:02:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:21.248 03:02:00 -- host/auth.sh@68 -- # digest=sha512 00:17:21.248 03:02:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:21.248 03:02:00 -- host/auth.sh@68 -- # keyid=0 00:17:21.248 03:02:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:21.248 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.248 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.248 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.248 03:02:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:21.248 03:02:00 -- nvmf/common.sh@717 -- # local ip 00:17:21.248 03:02:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:21.248 03:02:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:21.248 03:02:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:21.248 03:02:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:21.248 03:02:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:21.248 03:02:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:21.248 03:02:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:21.248 03:02:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:21.248 03:02:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:21.248 03:02:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:17:21.248 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.248 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.814 nvme0n1 00:17:21.814 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.814 03:02:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:21.814 03:02:00 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:17:21.814 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.814 03:02:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:21.814 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:21.814 03:02:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.814 03:02:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:21.814 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:21.814 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:22.072 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.072 03:02:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:22.072 03:02:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:22.072 03:02:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:22.072 03:02:00 -- host/auth.sh@44 -- # digest=sha512 00:17:22.072 03:02:00 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.072 03:02:00 -- host/auth.sh@44 -- # keyid=1 00:17:22.072 03:02:00 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:22.072 03:02:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:22.072 03:02:00 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:22.072 03:02:00 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:22.072 03:02:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:17:22.072 03:02:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:22.072 03:02:00 -- host/auth.sh@68 -- # digest=sha512 00:17:22.072 03:02:00 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:22.072 03:02:00 -- host/auth.sh@68 -- # keyid=1 00:17:22.072 03:02:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.072 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.072 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:22.072 03:02:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.072 03:02:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:22.072 03:02:00 -- nvmf/common.sh@717 -- # local ip 00:17:22.072 03:02:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:22.072 03:02:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:22.072 03:02:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.073 03:02:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.073 03:02:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:22.073 03:02:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.073 03:02:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:22.073 03:02:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:22.073 03:02:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:22.073 03:02:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:17:22.073 03:02:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.073 03:02:00 -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 nvme0n1 00:17:22.669 03:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.669 03:02:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:22.669 03:02:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:22.669 03:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.669 03:02:01 -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.669 03:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.669 03:02:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.669 03:02:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:22.669 03:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.669 03:02:01 -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 03:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.669 03:02:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:22.669 03:02:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:22.669 03:02:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:22.669 03:02:01 -- host/auth.sh@44 -- # digest=sha512 00:17:22.669 03:02:01 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:22.669 03:02:01 -- host/auth.sh@44 -- # keyid=2 00:17:22.669 03:02:01 -- host/auth.sh@45 -- # key=DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:22.669 03:02:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:22.669 03:02:01 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:22.669 03:02:01 -- host/auth.sh@49 -- # echo DHHC-1:01:NTYyZDA4MjIzMTUxMTgzOTk3NWQyNGZlYWYxM2NlMjMpzwD4: 00:17:22.669 03:02:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:17:22.669 03:02:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:22.669 03:02:01 -- host/auth.sh@68 -- # digest=sha512 00:17:22.669 03:02:01 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:22.669 03:02:01 -- host/auth.sh@68 -- # keyid=2 00:17:22.669 03:02:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:22.669 03:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.669 03:02:01 -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 03:02:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:22.669 03:02:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:22.669 03:02:01 -- nvmf/common.sh@717 -- # local ip 00:17:22.669 03:02:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:22.669 03:02:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:22.669 03:02:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:22.669 03:02:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:22.669 03:02:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:22.669 03:02:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:22.669 03:02:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:22.669 03:02:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:22.669 03:02:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:22.669 03:02:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:22.669 03:02:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:22.669 03:02:01 -- common/autotest_common.sh@10 -- # set +x 00:17:23.236 nvme0n1 00:17:23.236 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.236 03:02:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.236 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.236 03:02:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:23.236 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:23.236 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.236 03:02:02 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:17:23.236 03:02:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.236 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.236 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:23.236 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.236 03:02:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:23.236 03:02:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:17:23.236 03:02:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:23.236 03:02:02 -- host/auth.sh@44 -- # digest=sha512 00:17:23.236 03:02:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:23.236 03:02:02 -- host/auth.sh@44 -- # keyid=3 00:17:23.236 03:02:02 -- host/auth.sh@45 -- # key=DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:23.236 03:02:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:23.236 03:02:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:23.236 03:02:02 -- host/auth.sh@49 -- # echo DHHC-1:02:ZmI4NTY4YjE0YTBhMjc1Nzg4OTJmMDNkODg4ODhkNDkxMjYyNTQ4YmY4ODlmODUzGQpjEA==: 00:17:23.236 03:02:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:17:23.236 03:02:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:23.236 03:02:02 -- host/auth.sh@68 -- # digest=sha512 00:17:23.236 03:02:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:23.236 03:02:02 -- host/auth.sh@68 -- # keyid=3 00:17:23.236 03:02:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:23.236 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.236 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:23.236 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.236 03:02:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:23.236 03:02:02 -- nvmf/common.sh@717 -- # local ip 00:17:23.236 03:02:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:23.236 03:02:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:23.236 03:02:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:23.236 03:02:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:23.236 03:02:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:23.236 03:02:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:23.236 03:02:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:23.236 03:02:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:23.236 03:02:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:23.236 03:02:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:17:23.236 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.236 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:23.802 nvme0n1 00:17:23.802 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.802 03:02:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:23.802 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.802 03:02:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:23.802 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:23.802 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:23.802 03:02:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.802 03:02:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:23.802 
03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:23.802 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:24.061 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.061 03:02:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:17:24.061 03:02:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:24.061 03:02:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:24.061 03:02:02 -- host/auth.sh@44 -- # digest=sha512 00:17:24.061 03:02:02 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:24.061 03:02:02 -- host/auth.sh@44 -- # keyid=4 00:17:24.061 03:02:02 -- host/auth.sh@45 -- # key=DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:24.061 03:02:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:17:24.061 03:02:02 -- host/auth.sh@48 -- # echo ffdhe8192 00:17:24.061 03:02:02 -- host/auth.sh@49 -- # echo DHHC-1:03:NGQ3OWE0MzJmMjY4NTE3YWY1NmRiYTY3MGQyOGM0MWJmMDMxZWU1NTNjZjdlN2ZhNGJiNzQ4OGU5YWY3OTIxOIPOEJU=: 00:17:24.061 03:02:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:17:24.061 03:02:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:17:24.061 03:02:02 -- host/auth.sh@68 -- # digest=sha512 00:17:24.061 03:02:02 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:17:24.061 03:02:02 -- host/auth.sh@68 -- # keyid=4 00:17:24.061 03:02:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:24.061 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.061 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:24.061 03:02:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.061 03:02:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:17:24.061 03:02:02 -- nvmf/common.sh@717 -- # local ip 00:17:24.061 03:02:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:24.061 03:02:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:24.061 03:02:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.061 03:02:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.061 03:02:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:24.061 03:02:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.061 03:02:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:24.061 03:02:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:24.061 03:02:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:24.061 03:02:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:24.061 03:02:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.061 03:02:02 -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 nvme0n1 00:17:24.639 03:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.639 03:02:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:17:24.639 03:02:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.639 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.639 03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 03:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.639 03:02:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.639 03:02:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:24.639 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.639 
03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 03:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.639 03:02:03 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:24.639 03:02:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:17:24.639 03:02:03 -- host/auth.sh@44 -- # digest=sha256 00:17:24.639 03:02:03 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:24.639 03:02:03 -- host/auth.sh@44 -- # keyid=1 00:17:24.639 03:02:03 -- host/auth.sh@45 -- # key=DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:24.639 03:02:03 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:17:24.639 03:02:03 -- host/auth.sh@48 -- # echo ffdhe2048 00:17:24.639 03:02:03 -- host/auth.sh@49 -- # echo DHHC-1:00:NGQyOTQ3ZDEzYjE3NDZlYWVkMDk3Mjc1ODgwYmM1MTRjMWI3ZDJkZDk5NjliMjNjCyaMpg==: 00:17:24.639 03:02:03 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:24.639 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.639 03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 03:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.639 03:02:03 -- host/auth.sh@119 -- # get_main_ns_ip 00:17:24.639 03:02:03 -- nvmf/common.sh@717 -- # local ip 00:17:24.639 03:02:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:24.639 03:02:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:24.639 03:02:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.639 03:02:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.639 03:02:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:24.639 03:02:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.639 03:02:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:24.639 03:02:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:24.639 03:02:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:24.639 03:02:03 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:24.639 03:02:03 -- common/autotest_common.sh@638 -- # local es=0 00:17:24.639 03:02:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:24.639 03:02:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:24.639 03:02:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.639 03:02:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:24.639 03:02:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.639 03:02:03 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:24.639 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.639 03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 request: 00:17:24.639 { 00:17:24.639 "name": "nvme0", 00:17:24.639 "trtype": "tcp", 00:17:24.639 "traddr": "10.0.0.1", 00:17:24.639 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:24.639 "adrfam": "ipv4", 00:17:24.639 "trsvcid": "4420", 00:17:24.639 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:24.639 "method": "bdev_nvme_attach_controller", 00:17:24.639 "req_id": 1 00:17:24.639 } 00:17:24.639 Got JSON-RPC error 
response 00:17:24.639 response: 00:17:24.639 { 00:17:24.639 "code": -32602, 00:17:24.639 "message": "Invalid parameters" 00:17:24.639 } 00:17:24.639 03:02:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:24.639 03:02:03 -- common/autotest_common.sh@641 -- # es=1 00:17:24.639 03:02:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:24.639 03:02:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:24.639 03:02:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:24.639 03:02:03 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.639 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.639 03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.639 03:02:03 -- host/auth.sh@121 -- # jq length 00:17:24.639 03:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.639 03:02:03 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:17:24.639 03:02:03 -- host/auth.sh@124 -- # get_main_ns_ip 00:17:24.639 03:02:03 -- nvmf/common.sh@717 -- # local ip 00:17:24.639 03:02:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:17:24.639 03:02:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:17:24.639 03:02:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:24.639 03:02:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:24.639 03:02:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:17:24.639 03:02:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:24.639 03:02:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:17:24.640 03:02:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:17:24.640 03:02:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:17:24.640 03:02:03 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:24.640 03:02:03 -- common/autotest_common.sh@638 -- # local es=0 00:17:24.640 03:02:03 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:24.640 03:02:03 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:24.640 03:02:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.640 03:02:03 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:24.640 03:02:03 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:24.640 03:02:03 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:24.640 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.640 03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 request: 00:17:24.640 { 00:17:24.640 "name": "nvme0", 00:17:24.640 "trtype": "tcp", 00:17:24.640 "traddr": "10.0.0.1", 00:17:24.640 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:24.640 "adrfam": "ipv4", 00:17:24.640 "trsvcid": "4420", 00:17:24.640 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:24.640 "dhchap_key": "key2", 00:17:24.640 "method": "bdev_nvme_attach_controller", 00:17:24.640 "req_id": 1 00:17:24.640 } 00:17:24.640 Got JSON-RPC error response 00:17:24.640 response: 00:17:24.640 { 00:17:24.640 "code": -32602, 00:17:24.640 "message": "Invalid parameters" 00:17:24.640 } 00:17:24.640 03:02:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
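Both failures above are expected: the first attach omits --dhchap-key entirely and the second presents key2 against a target keyed for key1, so each gets -32602 "Invalid parameters" back from bdev_nvme_attach_controller. The NOT wrapper expanded at autotest_common.sh@638-@665 inverts the exit status so the test passes only when the wrapped RPC fails. Trimmed to what the trace shows (the signal-exit branch at @649 and the empty error-pattern check at @660 are elided):

    NOT() {
        local es=0
        valid_exec_arg "$@" || return 1   # @626-@630: arg must be a function or binary
        "$@" || es=$?
        # @665: !es == 0 holds exactly when the command returned nonzero,
        # so NOT succeeds iff the wrapped command failed
        (( !es == 0 ))
    }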
00:17:24.640 03:02:03 -- common/autotest_common.sh@641 -- # es=1 00:17:24.640 03:02:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:24.640 03:02:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:24.640 03:02:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:24.640 03:02:03 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:17:24.640 03:02:03 -- host/auth.sh@127 -- # jq length 00:17:24.640 03:02:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:24.640 03:02:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.640 03:02:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:24.640 03:02:03 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:17:24.640 03:02:03 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:17:24.640 03:02:03 -- host/auth.sh@130 -- # cleanup 00:17:24.640 03:02:03 -- host/auth.sh@24 -- # nvmftestfini 00:17:24.640 03:02:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:24.640 03:02:03 -- nvmf/common.sh@117 -- # sync 00:17:24.640 03:02:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.640 03:02:03 -- nvmf/common.sh@120 -- # set +e 00:17:24.640 03:02:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.640 03:02:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.640 rmmod nvme_tcp 00:17:24.897 rmmod nvme_fabrics 00:17:24.897 03:02:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.897 03:02:03 -- nvmf/common.sh@124 -- # set -e 00:17:24.897 03:02:03 -- nvmf/common.sh@125 -- # return 0 00:17:24.897 03:02:03 -- nvmf/common.sh@478 -- # '[' -n 90429 ']' 00:17:24.897 03:02:03 -- nvmf/common.sh@479 -- # killprocess 90429 00:17:24.897 03:02:03 -- common/autotest_common.sh@936 -- # '[' -z 90429 ']' 00:17:24.897 03:02:03 -- common/autotest_common.sh@940 -- # kill -0 90429 00:17:24.897 03:02:03 -- common/autotest_common.sh@941 -- # uname 00:17:24.897 03:02:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:24.897 03:02:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90429 00:17:24.897 killing process with pid 90429 00:17:24.897 03:02:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:24.897 03:02:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:24.897 03:02:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90429' 00:17:24.897 03:02:03 -- common/autotest_common.sh@955 -- # kill 90429 00:17:24.897 03:02:03 -- common/autotest_common.sh@960 -- # wait 90429 00:17:24.897 03:02:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:24.897 03:02:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:24.897 03:02:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:24.897 03:02:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.897 03:02:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.897 03:02:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.897 03:02:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.897 03:02:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.897 03:02:04 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:24.897 03:02:04 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:24.897 03:02:04 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:24.897 03:02:04 -- host/auth.sh@27 -- # clean_kernel_target 00:17:24.897 03:02:04 -- 
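Teardown starts here: nvmftestfini syncs, unloads nvme-tcp and nvme-fabrics (the rmmod lines), and kills the nvmf target app, pid 90429. The killprocess helper expanded at @936-@960 is roughly the following sketch; the already-dead and sudo-wrapped branches are checked but not taken in this run, so their bodies are assumptions:

    killprocess() {
        local pid=$1 process_name
        [[ -z $pid ]] && return 1          # @936
        kill -0 "$pid"                     # @940: assert the pid is still alive
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        # @946: a sudo-wrapped process would need its child killed instead; here
        # the command name is reactor_0 (the SPDK app), so it is killed directly
        echo "killing process with pid $pid"
        kill "$pid"                        # @955
        wait "$pid"                        # @960: reap it so the listener is freed
    }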
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:24.897 03:02:04 -- nvmf/common.sh@675 -- # echo 0 00:17:24.897 03:02:04 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:24.897 03:02:04 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:24.897 03:02:04 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:24.897 03:02:04 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:24.897 03:02:04 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:17:24.897 03:02:04 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:17:25.155 03:02:04 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:25.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:25.722 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:17:25.722 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:17:25.980 03:02:04 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Qd3 /tmp/spdk.key-null.svp /tmp/spdk.key-sha256.tOK /tmp/spdk.key-sha384.Aqx /tmp/spdk.key-sha512.FRR /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:17:25.980 03:02:04 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:26.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:26.240 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:26.240 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:26.240 ************************************ 00:17:26.240 END TEST nvmf_auth 00:17:26.240 ************************************ 00:17:26.240 00:17:26.240 real 0m39.514s 00:17:26.240 user 0m35.124s 00:17:26.240 sys 0m3.633s 00:17:26.240 03:02:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:26.240 03:02:05 -- common/autotest_common.sh@10 -- # set +x 00:17:26.240 03:02:05 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:17:26.240 03:02:05 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:26.240 03:02:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:26.240 03:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.240 03:02:05 -- common/autotest_common.sh@10 -- # set +x 00:17:26.240 ************************************ 00:17:26.240 START TEST nvmf_digest 00:17:26.240 ************************************ 00:17:26.240 03:02:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:17:26.500 * Looking for test storage... 
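Before the digest suite above could start, clean_kernel_target undid the kernel target setup in reverse order (nvmf/common.sh@673-@687). Reconstructed from those trace lines; only the target of the echo 0 is inferred, since the attribute path is not visible:

    clean_kernel_target() {
        local subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
        [[ -e $subsys ]] || return 0                    # @673
        echo 0 > "$subsys/namespaces/1/enable"          # @675 (attribute inferred)
        rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
        rmdir "$subsys/namespaces/1"                    # @678
        rmdir /sys/kernel/config/nvmet/ports/1          # @679
        rmdir "$subsys"                                 # @680
        modules=(/sys/module/nvmet/holders/*)           # @682: anything pinning nvmet?
        modprobe -r nvmet_tcp nvmet                     # @684
    }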
00:17:26.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:26.500 03:02:05 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.500 03:02:05 -- nvmf/common.sh@7 -- # uname -s 00:17:26.500 03:02:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.500 03:02:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.500 03:02:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.500 03:02:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.500 03:02:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.500 03:02:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.500 03:02:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.500 03:02:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.500 03:02:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.500 03:02:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.500 03:02:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:17:26.500 03:02:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:17:26.500 03:02:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.500 03:02:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.500 03:02:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:26.500 03:02:05 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.500 03:02:05 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.500 03:02:05 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.500 03:02:05 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.500 03:02:05 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.500 03:02:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.500 03:02:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.500 03:02:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.500 03:02:05 -- paths/export.sh@5 -- # export PATH 00:17:26.500 03:02:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.500 03:02:05 -- nvmf/common.sh@47 -- # : 0 00:17:26.500 03:02:05 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.500 03:02:05 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.500 03:02:05 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.500 03:02:05 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.500 03:02:05 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.500 03:02:05 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.500 03:02:05 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.500 03:02:05 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.500 03:02:05 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:26.500 03:02:05 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:26.500 03:02:05 -- host/digest.sh@16 -- # runtime=2 00:17:26.500 03:02:05 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:26.500 03:02:05 -- host/digest.sh@138 -- # nvmftestinit 00:17:26.500 03:02:05 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:26.500 03:02:05 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.500 03:02:05 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:26.500 03:02:05 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:26.500 03:02:05 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:26.500 03:02:05 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.500 03:02:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.500 03:02:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.500 03:02:05 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:26.500 03:02:05 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:26.500 03:02:05 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:26.500 03:02:05 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:26.500 03:02:05 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:26.501 03:02:05 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:26.501 03:02:05 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:26.501 03:02:05 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:26.501 03:02:05 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:26.501 03:02:05 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:26.501 03:02:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:17:26.501 03:02:05 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:26.501 03:02:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:26.501 03:02:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:26.501 03:02:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:26.501 03:02:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:26.501 03:02:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:26.501 03:02:05 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:26.501 03:02:05 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:26.501 03:02:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:26.501 Cannot find device "nvmf_tgt_br" 00:17:26.501 03:02:05 -- nvmf/common.sh@155 -- # true 00:17:26.501 03:02:05 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:26.501 Cannot find device "nvmf_tgt_br2" 00:17:26.501 03:02:05 -- nvmf/common.sh@156 -- # true 00:17:26.501 03:02:05 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:26.501 03:02:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:26.501 Cannot find device "nvmf_tgt_br" 00:17:26.501 03:02:05 -- nvmf/common.sh@158 -- # true 00:17:26.501 03:02:05 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:26.501 Cannot find device "nvmf_tgt_br2" 00:17:26.501 03:02:05 -- nvmf/common.sh@159 -- # true 00:17:26.501 03:02:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:26.501 03:02:05 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:26.501 03:02:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:26.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.501 03:02:05 -- nvmf/common.sh@162 -- # true 00:17:26.501 03:02:05 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:26.501 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:26.501 03:02:05 -- nvmf/common.sh@163 -- # true 00:17:26.501 03:02:05 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:26.501 03:02:05 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:26.501 03:02:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:26.501 03:02:05 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:26.760 03:02:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:26.760 03:02:05 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:26.760 03:02:05 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:26.760 03:02:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:26.760 03:02:05 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:26.760 03:02:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:26.760 03:02:05 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:26.760 03:02:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:17:26.760 03:02:05 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:26.760 03:02:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:26.760 03:02:05 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:26.760 03:02:05 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:26.760 03:02:05 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:26.760 03:02:05 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:26.760 03:02:05 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:26.760 03:02:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:26.760 03:02:05 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:26.760 03:02:05 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:26.760 03:02:05 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:26.760 03:02:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:26.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:26.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:17:26.760 00:17:26.760 --- 10.0.0.2 ping statistics --- 00:17:26.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.760 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:17:26.760 03:02:05 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:26.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:26.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:17:26.760 00:17:26.760 --- 10.0.0.3 ping statistics --- 00:17:26.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.760 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:17:26.760 03:02:05 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:26.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:26.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:17:26.760 00:17:26.760 --- 10.0.0.1 ping statistics --- 00:17:26.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:26.760 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:17:26.760 03:02:05 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:26.760 03:02:05 -- nvmf/common.sh@422 -- # return 0 00:17:26.760 03:02:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:26.760 03:02:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:26.760 03:02:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:26.760 03:02:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:26.760 03:02:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:26.760 03:02:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:26.760 03:02:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:26.760 03:02:05 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:26.760 03:02:05 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:17:26.760 03:02:05 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:17:26.760 03:02:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:26.760 03:02:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.760 03:02:05 -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 ************************************ 00:17:27.019 START TEST nvmf_digest_clean 00:17:27.019 ************************************ 00:17:27.019 03:02:05 -- common/autotest_common.sh@1111 -- # run_digest 00:17:27.019 03:02:05 -- host/digest.sh@120 -- # local dsa_initiator 00:17:27.019 03:02:05 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:27.019 03:02:05 -- host/digest.sh@121 -- # dsa_initiator=false 
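The nvmf_veth_init sequence traced above stitches the whole test network together before any NVMe/TCP traffic flows: a target namespace, three veth pairs, a bridge joining the host-side ends, an iptables rule admitting port 4420, and ping checks in both directions. The same bring-up as a condensed sketch (names and addresses exactly as traced; error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target port 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target port 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br    # bridge the three host-side veth ends
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target port 1
    ping -c 1 10.0.0.3                                 # initiator -> target port 2
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    modprobe nvme-tcp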
00:17:27.019 03:02:05 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:27.019 03:02:05 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:27.019 03:02:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:27.019 03:02:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:27.019 03:02:05 -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 03:02:05 -- nvmf/common.sh@470 -- # nvmfpid=92039 00:17:27.019 03:02:05 -- nvmf/common.sh@471 -- # waitforlisten 92039 00:17:27.019 03:02:05 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:27.019 03:02:05 -- common/autotest_common.sh@817 -- # '[' -z 92039 ']' 00:17:27.019 03:02:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.019 03:02:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.019 03:02:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:27.019 03:02:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.019 03:02:05 -- common/autotest_common.sh@10 -- # set +x 00:17:27.019 [2024-04-23 03:02:06.015468] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:27.019 [2024-04-23 03:02:06.015764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:27.019 [2024-04-23 03:02:06.138846] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:27.019 [2024-04-23 03:02:06.159289] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.278 [2024-04-23 03:02:06.200698] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:27.278 [2024-04-23 03:02:06.200763] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:27.278 [2024-04-23 03:02:06.200777] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:27.278 [2024-04-23 03:02:06.200787] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:27.278 [2024-04-23 03:02:06.200797] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
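nvmfappstart then launches the target inside that namespace and holds it at --wait-for-rpc so the test can configure the accel layer before framework init. Roughly, using the command line traced above (the readiness poll is an assumption; waitforlisten's internals are not part of this trace):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # hypothetical poll until the RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done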
00:17:27.278 [2024-04-23 03:02:06.200831] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.278 03:02:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.278 03:02:06 -- common/autotest_common.sh@850 -- # return 0 00:17:27.278 03:02:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:27.278 03:02:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:27.278 03:02:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.278 03:02:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:27.278 03:02:06 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:27.278 03:02:06 -- host/digest.sh@126 -- # common_target_config 00:17:27.278 03:02:06 -- host/digest.sh@43 -- # rpc_cmd 00:17:27.278 03:02:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:27.278 03:02:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.278 null0 00:17:27.278 [2024-04-23 03:02:06.357249] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:27.278 [2024-04-23 03:02:06.381345] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:27.278 03:02:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:27.278 03:02:06 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:27.278 03:02:06 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:27.278 03:02:06 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:27.278 03:02:06 -- host/digest.sh@80 -- # rw=randread 00:17:27.278 03:02:06 -- host/digest.sh@80 -- # bs=4096 00:17:27.278 03:02:06 -- host/digest.sh@80 -- # qd=128 00:17:27.278 03:02:06 -- host/digest.sh@80 -- # scan_dsa=false 00:17:27.278 03:02:06 -- host/digest.sh@83 -- # bperfpid=92064 00:17:27.278 03:02:06 -- host/digest.sh@84 -- # waitforlisten 92064 /var/tmp/bperf.sock 00:17:27.278 03:02:06 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:27.278 03:02:06 -- common/autotest_common.sh@817 -- # '[' -z 92064 ']' 00:17:27.278 03:02:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:27.278 03:02:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:27.278 03:02:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:27.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:27.278 03:02:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:27.279 03:02:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.536 [2024-04-23 03:02:06.440721] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:27.536 [2024-04-23 03:02:06.441014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92064 ] 00:17:27.536 [2024-04-23 03:02:06.563501] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
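Before bperf comes up (its EAL notices begin above), common_target_config provisions the digest target over RPC. The rpc_cmd payloads themselves are not echoed, but the notices that are (the null0 bdev, "*** TCP Transport Init ***", the listener on 10.0.0.2 port 4420) suggest roughly this sequence; the bdev size, block size, and transport defaults below are assumptions:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_start_init                        # leave --wait-for-rpc mode
    $rpc nvmf_create_transport -t tcp                # "*** TCP Transport Init ***"
    $rpc bdev_null_create null0 100 4096             # size/block size assumed
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                   # "Listening on 10.0.0.2 port 4420"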
00:17:27.536 [2024-04-23 03:02:06.584066] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.536 [2024-04-23 03:02:06.625103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:27.536 03:02:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.536 03:02:06 -- common/autotest_common.sh@850 -- # return 0 00:17:27.536 03:02:06 -- host/digest.sh@86 -- # false 00:17:27.536 03:02:06 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:27.536 03:02:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:28.103 03:02:06 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:28.103 03:02:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:28.103 nvme0n1 00:17:28.361 03:02:07 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:28.361 03:02:07 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:28.361 Running I/O for 2 seconds... 00:17:30.288 00:17:30.288 Latency(us) 00:17:30.288 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.288 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:30.288 nvme0n1 : 2.01 16689.78 65.19 0.00 0.00 7665.15 6613.18 19422.49 00:17:30.288 =================================================================================================================== 00:17:30.288 Total : 16689.78 65.19 0.00 0.00 7665.15 6613.18 19422.49 00:17:30.288 0 00:17:30.288 03:02:09 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:30.288 03:02:09 -- host/digest.sh@93 -- # get_accel_stats 00:17:30.288 03:02:09 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:30.288 03:02:09 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:30.288 | select(.opcode=="crc32c") 00:17:30.288 | "\(.module_name) \(.executed)"' 00:17:30.288 03:02:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:30.546 03:02:09 -- host/digest.sh@94 -- # false 00:17:30.546 03:02:09 -- host/digest.sh@94 -- # exp_module=software 00:17:30.546 03:02:09 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:30.546 03:02:09 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:30.546 03:02:09 -- host/digest.sh@98 -- # killprocess 92064 00:17:30.546 03:02:09 -- common/autotest_common.sh@936 -- # '[' -z 92064 ']' 00:17:30.546 03:02:09 -- common/autotest_common.sh@940 -- # kill -0 92064 00:17:30.546 03:02:09 -- common/autotest_common.sh@941 -- # uname 00:17:30.546 03:02:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:30.546 03:02:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92064 00:17:30.546 killing process with pid 92064 00:17:30.546 Received shutdown signal, test time was about 2.000000 seconds 00:17:30.546 00:17:30.546 Latency(us) 00:17:30.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:30.546 =================================================================================================================== 00:17:30.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:30.546 03:02:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
00:17:30.546 03:02:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:30.546 03:02:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92064' 00:17:30.546 03:02:09 -- common/autotest_common.sh@955 -- # kill 92064 00:17:30.546 03:02:09 -- common/autotest_common.sh@960 -- # wait 92064 00:17:30.805 03:02:09 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:30.805 03:02:09 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:30.805 03:02:09 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:30.805 03:02:09 -- host/digest.sh@80 -- # rw=randread 00:17:30.805 03:02:09 -- host/digest.sh@80 -- # bs=131072 00:17:30.805 03:02:09 -- host/digest.sh@80 -- # qd=16 00:17:30.805 03:02:09 -- host/digest.sh@80 -- # scan_dsa=false 00:17:30.805 03:02:09 -- host/digest.sh@83 -- # bperfpid=92111 00:17:30.805 03:02:09 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:30.805 03:02:09 -- host/digest.sh@84 -- # waitforlisten 92111 /var/tmp/bperf.sock 00:17:30.805 03:02:09 -- common/autotest_common.sh@817 -- # '[' -z 92111 ']' 00:17:30.805 03:02:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:30.805 03:02:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.805 03:02:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:30.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:30.805 03:02:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.805 03:02:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.805 [2024-04-23 03:02:09.876749] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:30.805 [2024-04-23 03:02:09.877105] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefixI/O size of 131072 is greater than zero copy threshold (65536). 00:17:30.805 Zero copy mechanism will not be used. 00:17:30.805 =spdk_pid92111 ] 00:17:31.064 [2024-04-23 03:02:09.999575] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:31.064 [2024-04-23 03:02:10.019013] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.064 [2024-04-23 03:02:10.058933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.064 03:02:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.064 03:02:10 -- common/autotest_common.sh@850 -- # return 0 00:17:31.064 03:02:10 -- host/digest.sh@86 -- # false 00:17:31.064 03:02:10 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:31.064 03:02:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:31.324 03:02:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.324 03:02:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:31.585 nvme0n1 00:17:31.585 03:02:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:31.585 03:02:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:31.846 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:31.846 Zero copy mechanism will not be used. 00:17:31.846 Running I/O for 2 seconds... 00:17:33.767 00:17:33.767 Latency(us) 00:17:33.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:33.767 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:33.767 nvme0n1 : 2.00 6638.64 829.83 0.00 0.00 2406.57 2070.34 8281.37 00:17:33.767 =================================================================================================================== 00:17:33.767 Total : 6638.64 829.83 0.00 0.00 2406.57 2070.34 8281.37 00:17:33.767 0 00:17:33.767 03:02:12 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:33.767 03:02:12 -- host/digest.sh@93 -- # get_accel_stats 00:17:33.767 03:02:12 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:33.767 03:02:12 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:33.767 | select(.opcode=="crc32c") 00:17:33.767 | "\(.module_name) \(.executed)"' 00:17:33.767 03:02:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:34.027 03:02:13 -- host/digest.sh@94 -- # false 00:17:34.027 03:02:13 -- host/digest.sh@94 -- # exp_module=software 00:17:34.027 03:02:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:34.027 03:02:13 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:34.027 03:02:13 -- host/digest.sh@98 -- # killprocess 92111 00:17:34.027 03:02:13 -- common/autotest_common.sh@936 -- # '[' -z 92111 ']' 00:17:34.027 03:02:13 -- common/autotest_common.sh@940 -- # kill -0 92111 00:17:34.027 03:02:13 -- common/autotest_common.sh@941 -- # uname 00:17:34.027 03:02:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:34.027 03:02:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92111 00:17:34.027 killing process with pid 92111 00:17:34.027 Received shutdown signal, test time was about 2.000000 seconds 00:17:34.027 00:17:34.027 Latency(us) 00:17:34.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.027 =================================================================================================================== 00:17:34.027 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.027 03:02:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:34.027 03:02:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:34.027 03:02:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92111' 00:17:34.027 03:02:13 -- common/autotest_common.sh@955 -- # kill 92111 00:17:34.027 03:02:13 -- common/autotest_common.sh@960 -- # wait 92111 00:17:34.284 03:02:13 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:34.284 03:02:13 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:34.284 03:02:13 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:34.284 03:02:13 -- host/digest.sh@80 -- # rw=randwrite 00:17:34.284 03:02:13 -- host/digest.sh@80 -- # bs=4096 00:17:34.284 03:02:13 -- host/digest.sh@80 -- # qd=128 00:17:34.284 03:02:13 -- host/digest.sh@80 -- # scan_dsa=false 00:17:34.284 03:02:13 -- host/digest.sh@83 -- # bperfpid=92164 00:17:34.284 03:02:13 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:34.284 03:02:13 -- host/digest.sh@84 -- # waitforlisten 92164 /var/tmp/bperf.sock 00:17:34.284 03:02:13 -- common/autotest_common.sh@817 -- # '[' -z 92164 ']' 00:17:34.284 03:02:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:34.284 03:02:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:34.284 03:02:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:34.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:34.284 03:02:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:34.284 03:02:13 -- common/autotest_common.sh@10 -- # set +x 00:17:34.284 [2024-04-23 03:02:13.318508] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:34.284 [2024-04-23 03:02:13.318811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92164 ] 00:17:34.284 [2024-04-23 03:02:13.439150] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:34.542 [2024-04-23 03:02:13.455108] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.542 [2024-04-23 03:02:13.490119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.542 03:02:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:34.542 03:02:13 -- common/autotest_common.sh@850 -- # return 0 00:17:34.542 03:02:13 -- host/digest.sh@86 -- # false 00:17:34.542 03:02:13 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:34.542 03:02:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:34.801 03:02:13 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:34.801 03:02:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:35.367 nvme0n1 00:17:35.368 03:02:14 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:35.368 03:02:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:35.368 Running I/O for 2 seconds... 00:17:37.267 00:17:37.267 Latency(us) 00:17:37.267 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.267 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:37.267 nvme0n1 : 2.01 13721.74 53.60 0.00 0.00 9320.60 8221.79 18230.92 00:17:37.267 =================================================================================================================== 00:17:37.267 Total : 13721.74 53.60 0.00 0.00 9320.60 8221.79 18230.92 00:17:37.267 0 00:17:37.267 03:02:16 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:37.267 03:02:16 -- host/digest.sh@93 -- # get_accel_stats 00:17:37.267 03:02:16 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:37.267 03:02:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:37.268 03:02:16 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:37.268 | select(.opcode=="crc32c") 00:17:37.268 | "\(.module_name) \(.executed)"' 00:17:37.844 03:02:16 -- host/digest.sh@94 -- # false 00:17:37.844 03:02:16 -- host/digest.sh@94 -- # exp_module=software 00:17:37.844 03:02:16 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:37.844 03:02:16 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:37.844 03:02:16 -- host/digest.sh@98 -- # killprocess 92164 00:17:37.844 03:02:16 -- common/autotest_common.sh@936 -- # '[' -z 92164 ']' 00:17:37.844 03:02:16 -- common/autotest_common.sh@940 -- # kill -0 92164 00:17:37.844 03:02:16 -- common/autotest_common.sh@941 -- # uname 00:17:37.844 03:02:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:37.844 03:02:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92164 00:17:37.844 killing process with pid 92164 00:17:37.844 Received shutdown signal, test time was about 2.000000 seconds 00:17:37.844 00:17:37.844 Latency(us) 00:17:37.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:37.844 =================================================================================================================== 00:17:37.844 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:37.844 03:02:16 -- common/autotest_common.sh@942 -- # process_name=reactor_1 
00:17:37.844 03:02:16 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:37.844 03:02:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92164' 00:17:37.844 03:02:16 -- common/autotest_common.sh@955 -- # kill 92164 00:17:37.844 03:02:16 -- common/autotest_common.sh@960 -- # wait 92164 00:17:37.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:37.844 03:02:16 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:37.844 03:02:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:37.844 03:02:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:37.844 03:02:16 -- host/digest.sh@80 -- # rw=randwrite 00:17:37.844 03:02:16 -- host/digest.sh@80 -- # bs=131072 00:17:37.844 03:02:16 -- host/digest.sh@80 -- # qd=16 00:17:37.844 03:02:16 -- host/digest.sh@80 -- # scan_dsa=false 00:17:37.844 03:02:16 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:37.844 03:02:16 -- host/digest.sh@83 -- # bperfpid=92211 00:17:37.844 03:02:16 -- host/digest.sh@84 -- # waitforlisten 92211 /var/tmp/bperf.sock 00:17:37.844 03:02:16 -- common/autotest_common.sh@817 -- # '[' -z 92211 ']' 00:17:37.844 03:02:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:37.844 03:02:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.844 03:02:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:37.844 03:02:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.844 03:02:16 -- common/autotest_common.sh@10 -- # set +x 00:17:37.844 [2024-04-23 03:02:16.919966] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:37.844 [2024-04-23 03:02:16.920235] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92211 ] 00:17:37.844 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:37.844 Zero copy mechanism will not be used. 00:17:38.104 [2024-04-23 03:02:17.039186] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:38.104 [2024-04-23 03:02:17.055657] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:38.104 [2024-04-23 03:02:17.093266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.104 03:02:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.104 03:02:17 -- common/autotest_common.sh@850 -- # return 0 00:17:38.104 03:02:17 -- host/digest.sh@86 -- # false 00:17:38.104 03:02:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:38.104 03:02:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:38.362 03:02:17 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.362 03:02:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:38.929 nvme0n1 00:17:38.929 03:02:17 -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:38.929 03:02:17 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:38.929 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:38.929 Zero copy mechanism will not be used. 00:17:38.929 Running I/O for 2 seconds... 00:17:40.833 00:17:40.833 Latency(us) 00:17:40.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:40.833 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:40.833 nvme0n1 : 2.00 5420.51 677.56 0.00 0.00 2944.98 1578.82 4230.05 00:17:40.833 =================================================================================================================== 00:17:40.833 Total : 5420.51 677.56 0.00 0.00 2944.98 1578.82 4230.05 00:17:40.833 0 00:17:40.833 03:02:19 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:40.833 03:02:19 -- host/digest.sh@93 -- # get_accel_stats 00:17:40.833 03:02:19 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:40.833 03:02:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:40.833 03:02:19 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:40.833 | select(.opcode=="crc32c") 00:17:40.833 | "\(.module_name) \(.executed)"' 00:17:41.092 03:02:20 -- host/digest.sh@94 -- # false 00:17:41.092 03:02:20 -- host/digest.sh@94 -- # exp_module=software 00:17:41.092 03:02:20 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:41.092 03:02:20 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:41.092 03:02:20 -- host/digest.sh@98 -- # killprocess 92211 00:17:41.092 03:02:20 -- common/autotest_common.sh@936 -- # '[' -z 92211 ']' 00:17:41.092 03:02:20 -- common/autotest_common.sh@940 -- # kill -0 92211 00:17:41.092 03:02:20 -- common/autotest_common.sh@941 -- # uname 00:17:41.092 03:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.092 03:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92211 00:17:41.352 killing process with pid 92211 00:17:41.352 Received shutdown signal, test time was about 2.000000 seconds 00:17:41.352 00:17:41.352 Latency(us) 00:17:41.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.352 =================================================================================================================== 00:17:41.352 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.352 03:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:41.352 03:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:41.352 03:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92211' 00:17:41.352 03:02:20 -- common/autotest_common.sh@955 -- # kill 92211 00:17:41.352 03:02:20 -- common/autotest_common.sh@960 -- # wait 92211 00:17:41.352 03:02:20 -- host/digest.sh@132 -- # killprocess 92039 00:17:41.352 03:02:20 -- common/autotest_common.sh@936 -- # '[' -z 92039 ']' 00:17:41.352 03:02:20 -- common/autotest_common.sh@940 -- # kill -0 92039 00:17:41.352 03:02:20 -- common/autotest_common.sh@941 -- # uname 00:17:41.352 03:02:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.352 03:02:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92039 00:17:41.352 killing process with pid 92039 00:17:41.352 03:02:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:41.352 03:02:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:41.352 03:02:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92039' 00:17:41.352 03:02:20 -- common/autotest_common.sh@955 -- # kill 92039 00:17:41.352 03:02:20 -- common/autotest_common.sh@960 -- # wait 92039 00:17:41.610 ************************************ 00:17:41.610 END TEST nvmf_digest_clean 00:17:41.610 ************************************ 00:17:41.610 00:17:41.610 real 0m14.664s 00:17:41.610 user 0m28.414s 00:17:41.610 sys 0m4.286s 00:17:41.610 03:02:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:41.610 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.610 03:02:20 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:41.610 03:02:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:41.610 03:02:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:41.610 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.610 ************************************ 00:17:41.610 START TEST nvmf_digest_error 00:17:41.610 ************************************ 00:17:41.610 03:02:20 -- common/autotest_common.sh@1111 -- # run_digest_error 00:17:41.610 03:02:20 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:41.610 03:02:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:41.610 03:02:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:41.610 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.610 03:02:20 -- nvmf/common.sh@470 -- # nvmfpid=92291 00:17:41.610 03:02:20 -- nvmf/common.sh@471 -- # waitforlisten 92291 00:17:41.610 03:02:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:41.610 03:02:20 -- common/autotest_common.sh@817 -- # '[' -z 92291 ']' 00:17:41.610 03:02:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:41.610 03:02:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:41.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:41.610 03:02:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:41.610 03:02:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:41.610 03:02:20 -- common/autotest_common.sh@10 -- # set +x 00:17:41.869 [2024-04-23 03:02:20.805493] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:41.869 [2024-04-23 03:02:20.805617] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.869 [2024-04-23 03:02:20.931777] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:41.869 [2024-04-23 03:02:20.946795] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.869 [2024-04-23 03:02:20.988283] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.869 [2024-04-23 03:02:20.988334] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:41.869 [2024-04-23 03:02:20.988346] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.869 [2024-04-23 03:02:20.988354] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.869 [2024-04-23 03:02:20.988361] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.869 [2024-04-23 03:02:20.988395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.128 03:02:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.128 03:02:21 -- common/autotest_common.sh@850 -- # return 0 00:17:42.128 03:02:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:42.128 03:02:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:42.128 03:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:42.128 03:02:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.128 03:02:21 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:42.128 03:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.128 03:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:42.128 [2024-04-23 03:02:21.092890] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:42.128 03:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.128 03:02:21 -- host/digest.sh@105 -- # common_target_config 00:17:42.128 03:02:21 -- host/digest.sh@43 -- # rpc_cmd 00:17:42.128 03:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.128 03:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:42.128 null0 00:17:42.128 [2024-04-23 03:02:21.169428] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:42.128 [2024-04-23 03:02:21.193559] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:42.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:17:42.128 03:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.128 03:02:21 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:42.128 03:02:21 -- host/digest.sh@54 -- # local rw bs qd 00:17:42.128 03:02:21 -- host/digest.sh@56 -- # rw=randread 00:17:42.128 03:02:21 -- host/digest.sh@56 -- # bs=4096 00:17:42.129 03:02:21 -- host/digest.sh@56 -- # qd=128 00:17:42.129 03:02:21 -- host/digest.sh@58 -- # bperfpid=92315 00:17:42.129 03:02:21 -- host/digest.sh@60 -- # waitforlisten 92315 /var/tmp/bperf.sock 00:17:42.129 03:02:21 -- common/autotest_common.sh@817 -- # '[' -z 92315 ']' 00:17:42.129 03:02:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:42.129 03:02:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.129 03:02:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:42.129 03:02:21 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:42.129 03:02:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.129 03:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:42.129 [2024-04-23 03:02:21.257832] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:42.129 [2024-04-23 03:02:21.258079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92315 ] 00:17:42.387 [2024-04-23 03:02:21.379991] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:17:42.387 [2024-04-23 03:02:21.399792] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.387 [2024-04-23 03:02:21.438839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.387 03:02:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.387 03:02:21 -- common/autotest_common.sh@850 -- # return 0 00:17:42.387 03:02:21 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:42.387 03:02:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:42.646 03:02:21 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:42.646 03:02:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:42.646 03:02:21 -- common/autotest_common.sh@10 -- # set +x 00:17:42.904 03:02:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:42.904 03:02:21 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:42.904 03:02:21 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:43.163 nvme0n1 00:17:43.163 03:02:22 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:43.163 03:02:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:43.163 03:02:22 -- common/autotest_common.sh@10 -- # set +x 00:17:43.163 03:02:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:43.163 03:02:22 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:43.163 03:02:22 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:43.422 Running I/O for 2 seconds... 
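Putting the pieces of this first error pass together: bperf's NVMe bdev layer is configured with error stats and the traced retry count, injection is disabled just long enough to attach with data digest enabled, then crc32c results are corrupted and the workload is kicked off. The digest mismatches that follow (nvme_tcp.c:1447 below) apparently surface as TRANSIENT TRANSPORT ERROR completions caught in the host's TCP receive path rather than as hard I/O failures. A sketch of the sequence, with sockets as traced:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc accel_error_inject_error -o crc32c -t disable    # attach must succeed cleanly
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256    # -i 256 as traced
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests    # 2 seconds of randread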
00:17:43.422 [2024-04-23 03:02:22.369208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.369457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.369597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.391041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.391082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.391096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.410955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.411040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.411070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.431074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.431113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.431159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.450485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.450544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.450590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.470123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.470171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.470202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.489717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.489771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.489784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.509082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.509120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.509164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.528501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.528538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.528552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.548465] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.548505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.548520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.422 [2024-04-23 03:02:22.567914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.422 [2024-04-23 03:02:22.567951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.422 [2024-04-23 03:02:22.567965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.587247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.587302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.587333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.606859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.606897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.606911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.626927] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.626979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.626993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.646546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.646584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.646598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.666277] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.666342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.666373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.686006] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.686044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.686074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.706310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.706377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.706391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.726160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.726207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.726239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.745873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.745925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:19748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.745955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.765294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.765331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.765361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.784740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.784807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.784836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.682 [2024-04-23 03:02:22.804350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.682 [2024-04-23 03:02:22.804387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2592 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.682 [2024-04-23 03:02:22.804417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.683 [2024-04-23 03:02:22.823990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.683 [2024-04-23 03:02:22.824063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.683 [2024-04-23 03:02:22.824078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.846568] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.846675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 [2024-04-23 03:02:22.846690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.871282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.871737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:11945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 [2024-04-23 03:02:22.871759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.898404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.898469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 [2024-04-23 03:02:22.898493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.919557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.919606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 
[2024-04-23 03:02:22.919624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.940964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.941014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 [2024-04-23 03:02:22.941032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.961217] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.961267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 [2024-04-23 03:02:22.961285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.942 [2024-04-23 03:02:22.981864] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.942 [2024-04-23 03:02:22.981915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14682 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.942 [2024-04-23 03:02:22.981934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.943 [2024-04-23 03:02:23.003370] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.943 [2024-04-23 03:02:23.003440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.943 [2024-04-23 03:02:23.003459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.943 [2024-04-23 03:02:23.024796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.943 [2024-04-23 03:02:23.024847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.943 [2024-04-23 03:02:23.024865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.943 [2024-04-23 03:02:23.046494] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.943 [2024-04-23 03:02:23.046550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.943 [2024-04-23 03:02:23.046570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.943 [2024-04-23 03:02:23.067622] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.943 [2024-04-23 03:02:23.067664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1449 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.943 [2024-04-23 03:02:23.067679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:43.943 [2024-04-23 03:02:23.087842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:43.943 [2024-04-23 03:02:23.087883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:43.943 [2024-04-23 03:02:23.087897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.108387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.108424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.108438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.128213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.128278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:23442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.128294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.148572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.148640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.148685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.168228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.168266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.168279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.187643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.187684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.187700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.206863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.206903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:83 nsid:1 lba:16345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.206917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.226499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.226539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.226553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.246110] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.246158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.246189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.266333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.266399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.266415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.286291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.204 [2024-04-23 03:02:23.286332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.204 [2024-04-23 03:02:23.286347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.204 [2024-04-23 03:02:23.305914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.205 [2024-04-23 03:02:23.305956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.205 [2024-04-23 03:02:23.305970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.205 [2024-04-23 03:02:23.324933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.205 [2024-04-23 03:02:23.324974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.205 [2024-04-23 03:02:23.324989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.205 [2024-04-23 03:02:23.345206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.205 [2024-04-23 03:02:23.345272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.205 [2024-04-23 03:02:23.345302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.364180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.364221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.364248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.383623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.383664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.383678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.402859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.402914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.402929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.422342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.422391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.422406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.441393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.441447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.441461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.461188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.461255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.461271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.481065] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 
00:17:44.463 [2024-04-23 03:02:23.481151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.481166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.500949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.500988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.501001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.520707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.520746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.520760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.540127] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.540176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.540208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.559583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.559623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.559637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.579103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.579182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.579199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.463 [2024-04-23 03:02:23.598925] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.463 [2024-04-23 03:02:23.598964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.463 [2024-04-23 03:02:23.598978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.464 [2024-04-23 03:02:23.618975] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.464 [2024-04-23 03:02:23.619016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:14219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.464 [2024-04-23 03:02:23.619030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.647937] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.647990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.648019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.667868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.667906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.667919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.687885] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.687923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.687954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.707554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.707595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.707610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.727588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.727629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.727644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.746726] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.746764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:13274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.746780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.766062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.766132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.766177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.786095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.786229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.786246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.806104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.806202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.806218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.826128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.826177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.826192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.845732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.845771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.845800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.723 [2024-04-23 03:02:23.865646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.723 [2024-04-23 03:02:23.865684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.723 [2024-04-23 03:02:23.865698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:23.885330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:23.885370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:7383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:23.885386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:23.905059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:23.905098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:23.905112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:23.925070] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:23.925110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:23.925124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:23.944684] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:23.944723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:23.944736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:23.964761] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:23.964815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:23.964844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:23.984660] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:23.984716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:23.984747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:24.004022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.982 [2024-04-23 03:02:24.004066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.982 [2024-04-23 03:02:24.004081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.982 [2024-04-23 03:02:24.023930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.983 [2024-04-23 03:02:24.023972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.983 [2024-04-23 03:02:24.023986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.983 [2024-04-23 03:02:24.044641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.983 [2024-04-23 03:02:24.044740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.983 [2024-04-23 03:02:24.044754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.983 [2024-04-23 03:02:24.064875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.983 [2024-04-23 03:02:24.064914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.983 [2024-04-23 03:02:24.064930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.983 [2024-04-23 03:02:24.085231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.983 [2024-04-23 03:02:24.085332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:21623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.983 [2024-04-23 03:02:24.085348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.983 [2024-04-23 03:02:24.104999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.983 [2024-04-23 03:02:24.105041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.983 [2024-04-23 03:02:24.105056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:44.983 [2024-04-23 03:02:24.125207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:44.983 [2024-04-23 03:02:24.125305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:44.983 [2024-04-23 03:02:24.125321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.146123] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.146191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.146223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.166453] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.166506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:12704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 
[2024-04-23 03:02:24.166536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.187348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.187385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.187423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.207219] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.207289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.207304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.226785] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.226823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.226837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.246769] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.246809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.246824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.266729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.266782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.266796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.286673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.286728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:25219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:45.242 [2024-04-23 03:02:24.286741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:45.242 [2024-04-23 03:02:24.306822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70) 00:17:45.242 [2024-04-23 03:02:24.306886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21364 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:45.242 [2024-04-23 03:02:24.306918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:45.242 [2024-04-23 03:02:24.326813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70)
00:17:45.242 [2024-04-23 03:02:24.326869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:45.242 [2024-04-23 03:02:24.326884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:45.242 [2024-04-23 03:02:24.346377] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x769a70)
00:17:45.242 [2024-04-23 03:02:24.346428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:45.242 [2024-04-23 03:02:24.346459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:45.242
00:17:45.242 Latency(us)
00:17:45.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:45.242 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:45.242 nvme0n1 : 2.01 12587.76 49.17 0.00 0.00 10160.85 8936.73 38606.66
00:17:45.242 ===================================================================================================================
00:17:45.242 Total : 12587.76 49.17 0.00 0.00 10160.85 8936.73 38606.66
00:17:45.242 0
00:17:45.242 03:02:24 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:45.242 03:02:24 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:45.242 03:02:24 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:45.242 | .driver_specific
00:17:45.242 | .nvme_error
00:17:45.242 | .status_code
00:17:45.242 | .command_transient_transport_error'
00:17:45.242 03:02:24 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:45.501 03:02:24 -- host/digest.sh@71 -- # (( 99 > 0 ))
00:17:45.501 03:02:24 -- host/digest.sh@73 -- # killprocess 92315
00:17:45.501 03:02:24 -- common/autotest_common.sh@936 -- # '[' -z 92315 ']'
00:17:45.501 03:02:24 -- common/autotest_common.sh@940 -- # kill -0 92315
00:17:45.501 03:02:24 -- common/autotest_common.sh@941 -- # uname
00:17:45.501 03:02:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:45.501 03:02:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92315
00:17:45.760 killing process with pid 92315
Received shutdown signal, test time was about 2.000000 seconds
00:17:45.760
00:17:45.760 Latency(us)
00:17:45.760 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:45.760 ===================================================================================================================
00:17:45.760 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:45.760 03:02:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:45.760 03:02:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:45.760 03:02:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92315'
00:17:45.760 03:02:24 -- common/autotest_common.sh@955 -- # kill 92315
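The trace above is where the test turns the error spam into a verdict: get_transient_errcount fetches bdevperf's per-bdev iostat over the bperf RPC socket and pulls the NVMe transient-transport-error counter out with jq (the counter is populated because error stats were enabled with bdev_nvme_set_options --nvme-error-stat, as the next pass's trace shows). A standalone sketch of that query, assuming the same socket path and bdev name as in this run:

    # Hypothetical one-liner equivalent of "get_transient_errcount nvme0n1":
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0]
                 | .driver_specific
                 | .nvme_error
                 | .status_code
                 | .command_transient_transport_error'

The (( 99 > 0 )) line is the actual assertion: 99 reads completed with TRANSIENT TRANSPORT ERROR during the 2-second run, so the digest-error path was exercised and this pass succeeds before bdevperf is torn down.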
03:02:24 -- common/autotest_common.sh@960 -- # wait 92315
00:17:45.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:45.760 03:02:24 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:45.760 03:02:24 -- host/digest.sh@54 -- # local rw bs qd
00:17:45.760 03:02:24 -- host/digest.sh@56 -- # rw=randread
00:17:45.760 03:02:24 -- host/digest.sh@56 -- # bs=131072
00:17:45.760 03:02:24 -- host/digest.sh@56 -- # qd=16
00:17:45.760 03:02:24 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:45.760 03:02:24 -- host/digest.sh@58 -- # bperfpid=92368
00:17:45.760 03:02:24 -- host/digest.sh@60 -- # waitforlisten 92368 /var/tmp/bperf.sock
00:17:45.760 03:02:24 -- common/autotest_common.sh@817 -- # '[' -z 92368 ']'
00:17:45.760 03:02:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:45.760 03:02:24 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:45.760 03:02:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:45.760 03:02:24 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:45.760 03:02:24 -- common/autotest_common.sh@10 -- # set +x
00:17:45.760 [2024-04-23 03:02:24.870080] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:17:45.760 [2024-04-23 03:02:24.870400] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92368 ]
00:17:45.760 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:45.760 Zero copy mechanism will not be used.
00:17:46.019 [2024-04-23 03:02:24.990588] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
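Here digest.sh relaunches bdevperf for the next pass: random reads with 128 KiB I/Os at queue depth 16, per run_bperf_err's arguments. For readability, the same invocation from host/digest.sh@57 with the flags spelled out; the annotations reflect bdevperf's usage text rather than anything printed in this log:

    # -m 2: core mask 0x2, i.e. run on core 1 ("Reactor started on core 1" below)
    # -r:   UNIX-domain RPC socket that bperf_rpc/bperf_py talk to
    # -w/-o/-q/-t: random reads, 131072-byte I/Os, queue depth 16, 2-second run
    # -z:   start idle and wait for the perform_tests RPC before issuing I/O
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 131072 -t 2 -q 16 -z

The 131072-byte I/O size is also why bdevperf prints the zero-copy notice: it exceeds the 65536-byte zero-copy threshold.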
00:17:46.019 [2024-04-23 03:02:25.008080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:46.019 [2024-04-23 03:02:25.046172] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:46.019 03:02:25 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:46.019 03:02:25 -- common/autotest_common.sh@850 -- # return 0
00:17:46.019 03:02:25 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:46.019 03:02:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:46.277 03:02:25 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:46.277 03:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:46.277 03:02:25 -- common/autotest_common.sh@10 -- # set +x
00:17:46.277 03:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:46.277 03:02:25 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:46.277 03:02:25 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:46.536 nvme0n1
00:17:46.536 03:02:25 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:46.536 03:02:25 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:46.536 03:02:25 -- common/autotest_common.sh@10 -- # set +x
00:17:46.536 03:02:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:46.536 03:02:25 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:46.536 03:02:25 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:46.795 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:46.795 Zero copy mechanism will not be used.
00:17:46.795 Running I/O for 2 seconds...
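This trace carries the whole mechanism of the test: NVMe error counting is switched on with infinite bdev retries, a controller is attached with TCP data digest enabled (--ddgst), and the accel framework is told to corrupt its crc32c results (-t corrupt -i 32), so received data digests miscompare against what the host computes. A condensed replay of the sequence, as a sketch only; the trace does not show which RPC socket the rpc_cmd accel_error_inject_error calls use, so the bperf socket is an assumption here:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1  # count errors; retry forever
    $RPC accel_error_inject_error -o crc32c -t disable                  # start from a clean state
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                  # data digest on this controller
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32            # corrupt crc32c computations
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests                            # drive the 2-second workload

Each miscompare shows up below as a "data digest error on tqpair" from nvme_tcp.c, and because --bdev-retry-count is -1 the failed reads are retried rather than surfaced to the job, which is why the TRANSIENT TRANSPORT ERROR completions keep flowing for the full two seconds.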
00:17:46.795 [2024-04-23 03:02:25.811349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.795 [2024-04-23 03:02:25.811395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.811434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.815871] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.815926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.815940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.820951] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.821033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.821046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.826355] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.826420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.826433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.831418] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.831471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.831484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.836857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.836953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.836965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.842132] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.842221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.842234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.847664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.847700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.847714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.852943] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.852993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.853021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.858097] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.858157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.858172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.863095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.863171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.863186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.868064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.868114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.868127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.872848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.872898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.872911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.877867] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.877901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.877913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.882675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.882726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.882739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.887719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.887786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.887814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.892953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.892993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.893006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.898072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.898121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.898152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.903105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.903165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.903179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.908044] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.908093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.908107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.913291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.913376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:46.796 [2024-04-23 03:02:25.913390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.918451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.918501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.918513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.923445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.923480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.923493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.928617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.928667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.928679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.933682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.933731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.933759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.938842] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.938893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.938906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.943766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.796 [2024-04-23 03:02:25.943846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.796 [2024-04-23 03:02:25.943859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:46.796 [2024-04-23 03:02:25.949061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:46.797 [2024-04-23 03:02:25.949112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:46.797 [2024-04-23 03:02:25.949138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.057 [2024-04-23 03:02:25.954343] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.057 [2024-04-23 03:02:25.954378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.057 [2024-04-23 03:02:25.954392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.057 [2024-04-23 03:02:25.959461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.057 [2024-04-23 03:02:25.959496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.057 [2024-04-23 03:02:25.959509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.057 [2024-04-23 03:02:25.964371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.964404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.964417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.969267] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.969332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.969346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.974634] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.974683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.974696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.979754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.979835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.979879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.984669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.984703] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.984716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.989440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.989473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.989486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.994351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.994401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.994414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:25.999493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:25.999528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:25.999542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.004358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.004407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.004420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.009450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.009500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.009543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.014380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.014445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.014473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.018987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 
00:17:47.058 [2024-04-23 03:02:26.019021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.019033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.024041] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.024091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.024104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.028946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.029011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.029039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.033813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.033863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.033891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.038697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.038746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.038759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.043621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.043655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.043668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.048732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.048766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.048779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.053676] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.053709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.053722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.058815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.058850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.058863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.063501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.063536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.063549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.068549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.068598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.068629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.073582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.073617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.073630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.078565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.078599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.078612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.083461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.083498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.083511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.088220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.088252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.088265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.093166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.058 [2024-04-23 03:02:26.093212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.058 [2024-04-23 03:02:26.093225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.058 [2024-04-23 03:02:26.097965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.098015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.098044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.103266] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.103344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.103357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.108228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.108261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.108290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.113066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.113099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.113127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.117919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.117954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.117966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.122836] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.122870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.122883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.127967] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.128017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.128029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.133124] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.133215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.138206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.138264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.138277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.143214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.143262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.143276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.148256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.148300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.148312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.153216] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.153276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.153319] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.158450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.158498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.158511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.163234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.163310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.163324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.168566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.168601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.168613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.173163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.173222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.173250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.178130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.178200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.178214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.183895] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.183946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.183959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.188680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.188729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.188742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.193926] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.193959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.193971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.198991] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.199058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.199085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.203962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.203994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.204022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.209040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.209074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.209101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.059 [2024-04-23 03:02:26.214179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.059 [2024-04-23 03:02:26.214240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.059 [2024-04-23 03:02:26.214255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.219038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.219120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.219133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.224062] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.224110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.224123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.228965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.229015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.229028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.233715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.233781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.233794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.238623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.238656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.238668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.243586] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.243621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.243635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.248832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.248882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.248910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.254039] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.254088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.254101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.259053] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.259102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.259116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.263955] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.264004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.264016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.268857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.268891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.268903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.273690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.273723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.273734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.278528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.278595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.278623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.283824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.283872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.283886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.288812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.288847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.288860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.293774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 
00:17:47.321 [2024-04-23 03:02:26.293823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.293835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.298959] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.299005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.303805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.303839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.303852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.308650] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.308683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.308696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.313742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.313791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.321 [2024-04-23 03:02:26.313803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.321 [2024-04-23 03:02:26.318834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.321 [2024-04-23 03:02:26.318884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.318897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.323750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.323816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.323843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.328519] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.328584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.328596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.333193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.333253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.333266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.338096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.338155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.338169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.342983] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.343034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.343047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.348003] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.348038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.348051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.353151] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.353196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.353210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.357824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.357859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.357872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.362241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.362307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.362321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.366945] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.366981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.366994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.371866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.371902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.371915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.376358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.376391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.376404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.381311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.381359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.381389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.386386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.386449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.386465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:47.322 [2024-04-23 03:02:26.391232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:47.322 [2024-04-23 03:02:26.391288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:47.322 [2024-04-23 03:02:26.391302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:47.322 [2024-04-23 03:02:26.396288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430)
00:17:47.322 [2024-04-23 03:02:26.396338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:47.322 [2024-04-23 03:02:26.396351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-record sequence — a data digest error on tqpair=(0x1f7a430) from nvme_tcp.c:1447, the failed READ (sqid:1 cid:15 nsid:1, len:32, at a varying lba), and its completion with TRANSIENT TRANSPORT ERROR (00/22) — repeats roughly every 5 ms for well over a hundred READ commands between 03:02:26.40 and 03:02:27.11; only the lba and the cycling sqhd field (0001/0021/0041/0061) change between iterations ...]
00:17:48.155 [2024-04-23 03:02:27.119894] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430)
00:17:48.155 [2024-04-23 03:02:27.119961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.119974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.125238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.125298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.125312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.130544] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.130614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.130642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.135731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.135766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.135780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.141102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.141163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.141194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.146122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.146197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.146211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.151269] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.151330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.151358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.156103] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.156166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.156180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.161222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.161265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.161298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.166505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.166553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.166597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.171395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.171457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.171472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.176582] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.176615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.176644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.181669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.181734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.181761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.186822] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.186856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.186868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.192058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.192107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.192120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.196997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.197062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.155 [2024-04-23 03:02:27.197075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.155 [2024-04-23 03:02:27.202186] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.155 [2024-04-23 03:02:27.202247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.202262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.207222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.207297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.207311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.212100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.212161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.212175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.216797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.216863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.216876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.221840] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.221891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.221903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.226790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.226839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.226853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.231809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.231859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.236616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.236664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.236694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.241686] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.241736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.241749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.246675] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.246760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.246774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.251869] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.251917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.251963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.256859] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.256894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.256922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.262063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.262112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.262124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.266964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.267035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.267067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.272022] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.272072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.272084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.276998] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.277079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.277093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.282075] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.282125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.282154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.287152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.287195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.287209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.292215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.292300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.156 [2024-04-23 03:02:27.292313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.297199] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.297273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.297286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.302052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.302102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.302115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.307084] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.307118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.307142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.156 [2024-04-23 03:02:27.312227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.156 [2024-04-23 03:02:27.312286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.156 [2024-04-23 03:02:27.312300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.317533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.317568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.317581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.322352] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.322413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.322428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.327380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.327423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.327453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.332312] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.332393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.332405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.337547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.337599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.337617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.342804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.342853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.342865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.348032] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.348080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.348093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.353000] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.353032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.353061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.358430] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.358495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.358509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.363551] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.363585] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.363598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.368645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.368694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.368707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.373624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.373673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.373685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.378527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.378561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.378574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.383140] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.383218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.383232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.387738] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.387774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.387788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.392364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.392413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.392426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.397033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 
00:17:48.417 [2024-04-23 03:02:27.397068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.397081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.401824] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.401860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.401875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.406928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.406965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.406978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.412046] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.412082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.412095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.417166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.417256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.417269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.422592] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.422641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.422670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.428214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.428310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.428339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.433348] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.433398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.433412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.438364] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.438403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.438416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.443009] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.443073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.443102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.447884] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.447934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.447946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.452750] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.452811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.452825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.457919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.457967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.457979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.463286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.463346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.463375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.468539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.468573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.468585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.473616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.473665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.473678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.478619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.478667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.478679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.483496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.483531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.483544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.488513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.488545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.488557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.493456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.493489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.493500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.498263] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.498311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.498340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.503227] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.417 [2024-04-23 03:02:27.503261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.417 [2024-04-23 03:02:27.503274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.417 [2024-04-23 03:02:27.508079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.508134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.508179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.512942] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.512991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.513019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.517732] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.517787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.517800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.522520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.522555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.522568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.527553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.527588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.527602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.532723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.532774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.532789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.537976] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.538026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.538039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.542754] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.542791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.542804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.547454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.547489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.547502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.552325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.552359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.552372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.557188] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.557264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.557279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.562275] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.562321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.562335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.567279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.567313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:48.418 [2024-04-23 03:02:27.567326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.418 [2024-04-23 03:02:27.571928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.418 [2024-04-23 03:02:27.571964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.418 [2024-04-23 03:02:27.571976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.677 [2024-04-23 03:02:27.576649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.677 [2024-04-23 03:02:27.576714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.677 [2024-04-23 03:02:27.576727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.677 [2024-04-23 03:02:27.581560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.677 [2024-04-23 03:02:27.581594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.677 [2024-04-23 03:02:27.581607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.677 [2024-04-23 03:02:27.586673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.677 [2024-04-23 03:02:27.586709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.677 [2024-04-23 03:02:27.586722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.677 [2024-04-23 03:02:27.591606] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.677 [2024-04-23 03:02:27.591641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.591654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.596802] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.596852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.596879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.601921] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.601970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.601983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.607264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.607328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.607342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.612445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.612493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.612507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.617682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.617731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.617744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.622892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.622942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.622972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.627833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.627897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.627910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.632853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.632919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.632932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.637796] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.637847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.637860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.643045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.643082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.643095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.647565] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.647599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.647612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.652059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.652095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.652108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.656646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.656683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.656696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.661321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.661356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.661369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.666156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.666195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.666208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.670911] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 
00:17:48.678 [2024-04-23 03:02:27.670948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.670961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.675688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.675722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.675736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.680298] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.680333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.680346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.685040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.685077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.685090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.689759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.689795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.689809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.694391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.694426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.694440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.699131] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.699240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.699254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.703890] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.703926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.703939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.708553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.708588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.708601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.713682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.713731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.713744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.719001] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.678 [2024-04-23 03:02:27.719066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.678 [2024-04-23 03:02:27.719079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.678 [2024-04-23 03:02:27.724256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.724300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.724313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.729533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.729582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.729596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.734636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.734685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.734728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.739759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.739826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.739854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.744892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.744958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.745002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.750002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.750053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.750066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.754832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.754898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.754911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.759510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.759545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.759558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.764402] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.764463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.764477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.769272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.769305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.769317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.774029] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.774078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.774091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.778832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.778866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.778878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.783666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.783702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.783716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.788760] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.788795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.788808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.793633] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.793667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.793696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.798775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.798809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.798838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:48.679 [2024-04-23 03:02:27.803669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7a430) 00:17:48.679 [2024-04-23 03:02:27.803705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:48.679 [2024-04-23 03:02:27.803718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:48.679
00:17:48.679 Latency(us)
00:17:48.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:48.679 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:48.679 nvme0n1 : 2.00 6180.38 772.55 0.00 0.00 2584.64 2100.13 6523.81
00:17:48.679 ===================================================================================================================
00:17:48.679 Total : 6180.38 772.55 0.00 0.00 2584.64 2100.13 6523.81
00:17:48.679 0
00:17:48.679 03:02:27 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
03:02:27 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
03:02:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
03:02:27 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:48.679 | .driver_specific
00:17:48.679 | .nvme_error
00:17:48.679 | .status_code
00:17:48.679 | .command_transient_transport_error'
00:17:49.247 03:02:28 -- host/digest.sh@71 -- # (( 399 > 0 ))
00:17:49.247 03:02:28 -- host/digest.sh@73 -- # killprocess 92368
03:02:28 -- common/autotest_common.sh@936 -- # '[' -z 92368 ']'
03:02:28 -- common/autotest_common.sh@940 -- # kill -0 92368
03:02:28 -- common/autotest_common.sh@941 -- # uname
03:02:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
03:02:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92368
03:02:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1
03:02:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
killing process with pid 92368
03:02:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92368'
Received shutdown signal, test time was about 2.000000 seconds
00:17:49.247
00:17:49.247 Latency(us)
00:17:49.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:49.247 ===================================================================================================================
00:17:49.247 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:49.247 03:02:28 -- common/autotest_common.sh@955 -- # kill 92368
03:02:28 -- common/autotest_common.sh@960 -- # wait 92368
03:02:28 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
03:02:28 -- host/digest.sh@54 -- # local rw bs qd
03:02:28 -- host/digest.sh@56 -- # rw=randwrite
03:02:28 -- host/digest.sh@56 -- # bs=4096
03:02:28 -- host/digest.sh@56 -- # qd=128
03:02:28 -- host/digest.sh@58 -- # bperfpid=92415
03:02:28 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
03:02:28 -- host/digest.sh@60 -- # waitforlisten 92415 /var/tmp/bperf.sock
03:02:28 -- common/autotest_common.sh@817 -- # '[' -z 92415 ']'
03:02:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
03:02:28 -- common/autotest_common.sh@822 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
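To make the check traced above explicit: host/digest.sh asks the bdevperf instance, over its RPC socket, how many commands on nvme0n1 completed with TRANSIENT TRANSPORT ERROR, and the run passes only if that count (399 here) is non-zero. A minimal standalone sketch of that check, reconstructed from the trace — the rpc.py path, socket, and jq filter are taken verbatim; wrapping them in a function mirrors get_transient_errcount:

    # Count commands on a bdev that completed with TRANSIENT TRANSPORT ERROR,
    # per the get_transient_errcount trace above. The counter is populated
    # because bdevperf's controller is set up with
    # "bdev_nvme_set_options --nvme-error-stat" (see the setup trace below).
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

    errcount=$(get_transient_errcount nvme0n1)  # 399 in this run
    (( errcount > 0 ))  # fail the test unless digest errors were actually counted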
00:17:49.247 03:02:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
03:02:28 -- common/autotest_common.sh@826 -- # xtrace_disable
03:02:28 -- common/autotest_common.sh@10 -- # set +x
00:17:49.506 [2024-04-23 03:02:28.332654] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:17:49.506 [2024-04-23 03:02:28.332724] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92415 ]
00:17:49.506 [2024-04-23 03:02:28.451442] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:17:49.506 [2024-04-23 03:02:28.469583] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:49.506 [2024-04-23 03:02:28.506433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:49.506 03:02:28 -- common/autotest_common.sh@846 -- # (( i == 0 ))
03:02:28 -- common/autotest_common.sh@850 -- # return 0
03:02:28 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
03:02:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:49.764 03:02:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:49.764 03:02:28 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:49.764 03:02:28 -- common/autotest_common.sh@10 -- # set +x
00:17:49.764 03:02:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:49.764 03:02:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
03:02:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:50.023 nvme0n1
00:17:50.023 03:02:29 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:17:50.023 03:02:29 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:50.023 03:02:29 -- common/autotest_common.sh@10 -- # set +x
00:17:50.023 03:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:50.023 03:02:29 -- host/digest.sh@69 -- # bperf_py perform_tests
03:02:29 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:50.281 Running I/O for 2 seconds...
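The setup just traced arms the failure mode for the randwrite pass: the controller is attached with data digest enabled (--ddgst), and the accel layer is then told to corrupt 256 crc32c operations, so digest verification fails for the affected writes and each one completes with the COMMAND TRANSIENT TRANSPORT ERROR records that fill the log below. A condensed sketch of that sequence, restricted to commands that appear verbatim in the trace (socket path, target address, and NQN as in this run):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock

    # Keep per-bdev NVMe error counters and retry indefinitely instead of failing I/O.
    $rpc -s $sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start clean: no crc32c error injection while the controller attaches.
    $rpc -s $sock accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest (DDGST) enabled on the TCP connection.
    $rpc -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Now corrupt the next 256 crc32c operations so data digests stop matching.
    $rpc -s $sock accel_error_inject_error -o crc32c -t corrupt -i 256
    # Kick off the 2-second workload whose output follows.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests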
00:17:50.281 [2024-04-23 03:02:29.321704] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fef90 00:17:50.281 [2024-04-23 03:02:29.324752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-04-23 03:02:29.324823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:50.281 [2024-04-23 03:02:29.340747] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190feb58 00:17:50.281 [2024-04-23 03:02:29.343805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-04-23 03:02:29.343870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:50.281 [2024-04-23 03:02:29.359731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fe2e8 00:17:50.281 [2024-04-23 03:02:29.362753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-04-23 03:02:29.362801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:50.281 [2024-04-23 03:02:29.378706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fda78 00:17:50.281 [2024-04-23 03:02:29.381467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-04-23 03:02:29.381491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:50.281 [2024-04-23 03:02:29.396366] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fd208 00:17:50.281 [2024-04-23 03:02:29.398928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.281 [2024-04-23 03:02:29.398975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:50.282 [2024-04-23 03:02:29.415176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fc998 00:17:50.282 [2024-04-23 03:02:29.418167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:23664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.282 [2024-04-23 03:02:29.418219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:50.282 [2024-04-23 03:02:29.434188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fc128 00:17:50.282 [2024-04-23 03:02:29.436924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.282 [2024-04-23 03:02:29.436959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 
dnr:0 00:17:50.540 [2024-04-23 03:02:29.453128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fb8b8 00:17:50.540 [2024-04-23 03:02:29.455989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.540 [2024-04-23 03:02:29.456038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:50.540 [2024-04-23 03:02:29.472750] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fb048 00:17:50.540 [2024-04-23 03:02:29.475580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.540 [2024-04-23 03:02:29.475614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.491464] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190fa7d8 00:17:50.541 [2024-04-23 03:02:29.494240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.494280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.510530] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f9f68 00:17:50.541 [2024-04-23 03:02:29.513427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.513475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.529637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f96f8 00:17:50.541 [2024-04-23 03:02:29.532669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.532717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.548879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f8e88 00:17:50.541 [2024-04-23 03:02:29.551505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.551539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.567629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f8618 00:17:50.541 [2024-04-23 03:02:29.570228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.570283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.585839] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f7da8 00:17:50.541 [2024-04-23 03:02:29.588565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.588604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.604474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f7538 00:17:50.541 [2024-04-23 03:02:29.607308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.607383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.623018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f6cc8 00:17:50.541 [2024-04-23 03:02:29.625690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:17172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.625722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.641470] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f6458 00:17:50.541 [2024-04-23 03:02:29.644095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.644165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.659893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f5be8 00:17:50.541 [2024-04-23 03:02:29.662676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.662740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.678519] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f5378 00:17:50.541 [2024-04-23 03:02:29.680940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.680970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:50.541 [2024-04-23 03:02:29.696950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f4b08 00:17:50.541 [2024-04-23 03:02:29.699534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.541 [2024-04-23 03:02:29.699566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.715932] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f4298 00:17:50.800 [2024-04-23 03:02:29.718367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.718413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.733930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f3a28 00:17:50.800 [2024-04-23 03:02:29.736322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.736353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.752454] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f31b8 00:17:50.800 [2024-04-23 03:02:29.754869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:3099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.754915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.770798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f2948 00:17:50.800 [2024-04-23 03:02:29.773226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.773278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.788790] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f20d8 00:17:50.800 [2024-04-23 03:02:29.791045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.791075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.807033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f1868 00:17:50.800 [2024-04-23 03:02:29.809546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:12645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.809593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.825661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f0ff8 00:17:50.800 [2024-04-23 03:02:29.828006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.828036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.844678] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f0788 00:17:50.800 [2024-04-23 03:02:29.847036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:15986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.847097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.863580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eff18 00:17:50.800 [2024-04-23 03:02:29.865672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.865720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.881971] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ef6a8 00:17:50.800 [2024-04-23 03:02:29.884231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.884278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.900892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eee38 00:17:50.800 [2024-04-23 03:02:29.903016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.903049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.919400] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ee5c8 00:17:50.800 [2024-04-23 03:02:29.921574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.800 [2024-04-23 03:02:29.921621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:50.800 [2024-04-23 03:02:29.938558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190edd58 00:17:50.800 [2024-04-23 03:02:29.940761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:50.801 [2024-04-23 03:02:29.940808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:50.801 [2024-04-23 03:02:29.957611] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ed4e8 00:17:51.060 [2024-04-23 03:02:29.959603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:29.959638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:29.976045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ecc78 00:17:51.060 [2024-04-23 03:02:29.978211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:29.978264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:29.995271] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ec408 00:17:51.060 [2024-04-23 03:02:29.997386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:4598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:29.997464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.016624] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ebb98 00:17:51.060 [2024-04-23 03:02:30.019438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:12012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.019473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.036998] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eb328 00:17:51.060 [2024-04-23 03:02:30.039126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.039211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.056189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eaab8 00:17:51.060 [2024-04-23 03:02:30.058251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.058306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.075360] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ea248 00:17:51.060 [2024-04-23 03:02:30.077476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.077523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.094120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e99d8 00:17:51.060 [2024-04-23 03:02:30.096136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:25292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.096206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.112077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e9168 00:17:51.060 [2024-04-23 03:02:30.113923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.113954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.130306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e88f8 00:17:51.060 [2024-04-23 03:02:30.132343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:22693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.132390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.149172] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e8088 00:17:51.060 [2024-04-23 03:02:30.151129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:20189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.151198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.167388] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e7818 00:17:51.060 [2024-04-23 03:02:30.169443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:9025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.169497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.186028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e6fa8 00:17:51.060 [2024-04-23 03:02:30.188072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.188117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:51.060 [2024-04-23 03:02:30.204794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e6738 00:17:51.060 [2024-04-23 03:02:30.206628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.060 [2024-04-23 03:02:30.206658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:51.319 [2024-04-23 03:02:30.223315] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e5ec8 00:17:51.319 [2024-04-23 03:02:30.225182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.319 [2024-04-23 03:02:30.225233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:51.319 [2024-04-23 03:02:30.242038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e5658 00:17:51.319 [2024-04-23 03:02:30.243918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.319 [2024-04-23 03:02:30.243995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:51.319 [2024-04-23 03:02:30.260757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e4de8 00:17:51.319 [2024-04-23 03:02:30.262587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:1096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.319 [2024-04-23 03:02:30.262619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:51.319 [2024-04-23 03:02:30.279188] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e4578 00:17:51.319 [2024-04-23 03:02:30.280996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.319 [2024-04-23 03:02:30.281043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:51.319 [2024-04-23 03:02:30.298248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e3d08 00:17:51.319 [2024-04-23 03:02:30.300114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.319 [2024-04-23 03:02:30.300183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:51.319 [2024-04-23 03:02:30.316506] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e3498 00:17:51.320 [2024-04-23 03:02:30.318284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.318338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.334538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e2c28 00:17:51.320 [2024-04-23 03:02:30.336130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.336185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.352634] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e23b8 00:17:51.320 [2024-04-23 03:02:30.354253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 
03:02:30.354306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.371397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e1b48 00:17:51.320 [2024-04-23 03:02:30.373041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.373072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.390512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e12d8 00:17:51.320 [2024-04-23 03:02:30.392082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.392129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.409250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e0a68 00:17:51.320 [2024-04-23 03:02:30.410793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.410841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.427800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e01f8 00:17:51.320 [2024-04-23 03:02:30.429326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:13626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.429358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.446311] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190df988 00:17:51.320 [2024-04-23 03:02:30.447801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.447864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:51.320 [2024-04-23 03:02:30.463832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190df118 00:17:51.320 [2024-04-23 03:02:30.465263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:24164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.320 [2024-04-23 03:02:30.465295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.482497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190de8a8 00:17:51.579 [2024-04-23 03:02:30.483948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:51.579 [2024-04-23 03:02:30.483981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.501827] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190de038 00:17:51.579 [2024-04-23 03:02:30.503245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.503277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.527642] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190de038 00:17:51.579 [2024-04-23 03:02:30.530715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.530796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.546734] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190de8a8 00:17:51.579 [2024-04-23 03:02:30.549790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.549822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.566116] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190df118 00:17:51.579 [2024-04-23 03:02:30.569105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:13798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.569160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.585321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190df988 00:17:51.579 [2024-04-23 03:02:30.588250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:12649 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.588291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.603875] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e01f8 00:17:51.579 [2024-04-23 03:02:30.606643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.606691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.622719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e0a68 00:17:51.579 [2024-04-23 03:02:30.625506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13740 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.625601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.641741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e12d8 00:17:51.579 [2024-04-23 03:02:30.644459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.644536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.660228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e1b48 00:17:51.579 [2024-04-23 03:02:30.662942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.662973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.678664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e23b8 00:17:51.579 [2024-04-23 03:02:30.681427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.681473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.697180] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e2c28 00:17:51.579 [2024-04-23 03:02:30.700054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.700086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.715737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e3498 00:17:51.579 [2024-04-23 03:02:30.718534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.718579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:51.579 [2024-04-23 03:02:30.734272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e3d08 00:17:51.579 [2024-04-23 03:02:30.737111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.579 [2024-04-23 03:02:30.737185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.753344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e4578 00:17:51.839 [2024-04-23 03:02:30.755845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 
lba:17028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.755875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.771904] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e4de8 00:17:51.839 [2024-04-23 03:02:30.774316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.774360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.789878] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e5658 00:17:51.839 [2024-04-23 03:02:30.792465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.792511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.808393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e5ec8 00:17:51.839 [2024-04-23 03:02:30.811070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.811117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.827069] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e6738 00:17:51.839 [2024-04-23 03:02:30.829711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.829756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.845363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e6fa8 00:17:51.839 [2024-04-23 03:02:30.847786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.847863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.864026] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e7818 00:17:51.839 [2024-04-23 03:02:30.866643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.839 [2024-04-23 03:02:30.866689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:51.839 [2024-04-23 03:02:30.882686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e8088 00:17:51.840 [2024-04-23 03:02:30.885147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:48 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.885216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:51.840 [2024-04-23 03:02:30.901645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e88f8 00:17:51.840 [2024-04-23 03:02:30.904209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.904279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:51.840 [2024-04-23 03:02:30.920699] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e9168 00:17:51.840 [2024-04-23 03:02:30.922958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.922988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:51.840 [2024-04-23 03:02:30.938941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190e99d8 00:17:51.840 [2024-04-23 03:02:30.941591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.941620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:51.840 [2024-04-23 03:02:30.957575] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ea248 00:17:51.840 [2024-04-23 03:02:30.959909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.959955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:51.840 [2024-04-23 03:02:30.975950] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eaab8 00:17:51.840 [2024-04-23 03:02:30.978331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.978378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:51.840 [2024-04-23 03:02:30.994499] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eb328 00:17:51.840 [2024-04-23 03:02:30.996967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:51.840 [2024-04-23 03:02:30.996999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:52.099 [2024-04-23 03:02:31.013641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ebb98 00:17:52.100 [2024-04-23 03:02:31.015871] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.015917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.032498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ec408 00:17:52.100 [2024-04-23 03:02:31.034838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.034884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.050729] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ecc78 00:17:52.100 [2024-04-23 03:02:31.053008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.053039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.068722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ed4e8 00:17:52.100 [2024-04-23 03:02:31.071095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.071165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.087280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190edd58 00:17:52.100 [2024-04-23 03:02:31.089357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:6489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.089419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.105862] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ee5c8 00:17:52.100 [2024-04-23 03:02:31.107981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.108012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.125152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eee38 00:17:52.100 [2024-04-23 03:02:31.127276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.127308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.143857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190ef6a8 00:17:52.100 [2024-04-23 03:02:31.146086] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.146131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.163127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190eff18 00:17:52.100 [2024-04-23 03:02:31.165425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.165471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.182233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f0788 00:17:52.100 [2024-04-23 03:02:31.184329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:9566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.184390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.201558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f0ff8 00:17:52.100 [2024-04-23 03:02:31.203556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.203589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.219754] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f1868 00:17:52.100 [2024-04-23 03:02:31.221670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.221702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.238943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f20d8 00:17:52.100 [2024-04-23 03:02:31.241057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.100 [2024-04-23 03:02:31.241134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:52.100 [2024-04-23 03:02:31.257502] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f2948 00:17:52.360 [2024-04-23 03:02:31.259628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:52.360 [2024-04-23 03:02:31.259668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:52.360 [2024-04-23 03:02:31.276088] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f31b8 00:17:52.360 [2024-04-23 03:02:31.278250] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:52.360 [2024-04-23 03:02:31.278304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:17:52.360 [2024-04-23 03:02:31.294757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8960) with pdu=0x2000190f3a28
00:17:52.360 [2024-04-23 03:02:31.296753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:17:52.360 [2024-04-23 03:02:31.296816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:17:52.360
00:17:52.360 Latency(us)
00:17:52.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:52.360 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:52.360 nvme0n1 : 2.00 13524.26 52.83 0.00 0.00 9455.67 2695.91 36700.16
00:17:52.360 ===================================================================================================================
00:17:52.360 Total : 13524.26 52.83 0.00 0.00 9455.67 2695.91 36700.16
00:17:52.360 0
00:17:52.360 03:02:31 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:52.360 03:02:31 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:52.360 | .driver_specific
00:17:52.360 | .nvme_error
00:17:52.360 | .status_code
00:17:52.360 | .command_transient_transport_error'
00:17:52.360 03:02:31 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:52.360 03:02:31 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:52.618 03:02:31 -- host/digest.sh@71 -- # (( 106 > 0 ))
00:17:52.618 03:02:31 -- host/digest.sh@73 -- # killprocess 92415
00:17:52.619 03:02:31 -- common/autotest_common.sh@936 -- # '[' -z 92415 ']'
00:17:52.619 03:02:31 -- common/autotest_common.sh@940 -- # kill -0 92415
00:17:52.619 03:02:31 -- common/autotest_common.sh@941 -- # uname
00:17:52.619 03:02:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:17:52.619 03:02:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92415
00:17:52.619 03:02:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:17:52.619 03:02:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:17:52.619 killing process with pid 92415
00:17:52.619 03:02:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92415'
00:17:52.619 03:02:31 -- common/autotest_common.sh@955 -- # kill 92415
00:17:52.619 Received shutdown signal, test time was about 2.000000 seconds
00:17:52.619
00:17:52.619 Latency(us)
00:17:52.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:52.619 ===================================================================================================================
00:17:52.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:52.619 03:02:31 -- common/autotest_common.sh@960 -- # wait 92415
00:17:52.878 03:02:31 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:17:52.878 03:02:31 -- host/digest.sh@54 -- # local rw bs qd
00:17:52.878 03:02:31 -- host/digest.sh@56 -- # rw=randwrite
00:17:52.878 03:02:31 -- host/digest.sh@56 -- # bs=131072
00:17:52.878 03:02:31 -- host/digest.sh@56 -- # qd=16
00:17:52.878 03:02:31 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:52.878 03:02:31 -- host/digest.sh@58 -- # bperfpid=92467
00:17:52.878 03:02:31 -- host/digest.sh@60 -- # waitforlisten 92467 /var/tmp/bperf.sock
00:17:52.878 03:02:31 -- common/autotest_common.sh@817 -- # '[' -z 92467 ']'
00:17:52.878 03:02:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:52.878 03:02:31 -- common/autotest_common.sh@822 -- # local max_retries=100
00:17:52.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:52.878 03:02:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:52.878 03:02:31 -- common/autotest_common.sh@826 -- # xtrace_disable
00:17:52.878 03:02:31 -- common/autotest_common.sh@10 -- # set +x
00:17:52.878 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:52.878 Zero copy mechanism will not be used.
00:17:52.878 [2024-04-23 03:02:31.825464] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:17:52.878 [2024-04-23 03:02:31.825566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92467 ]
00:17:52.878 [2024-04-23 03:02:31.945590] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:17:52.878 [2024-04-23 03:02:31.963364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:52.878 [2024-04-23 03:02:32.001012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:17:53.137 03:02:32 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:17:53.137 03:02:32 -- common/autotest_common.sh@850 -- # return 0
00:17:53.137 03:02:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:53.137 03:02:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:53.395 03:02:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:53.395 03:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:53.395 03:02:32 -- common/autotest_common.sh@10 -- # set +x
00:17:53.395 03:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:53.395 03:02:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:53.395 03:02:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:53.653 nvme0n1
00:17:53.653 03:02:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:53.653 03:02:32 -- common/autotest_common.sh@549 -- # xtrace_disable
00:17:53.654 03:02:32 -- common/autotest_common.sh@10 -- # set +x
00:17:53.654 03:02:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:17:53.654 03:02:32 -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:53.654 03:02:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
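Stripped of the xtrace plumbing, the bring-up just traced reduces to a handful of commands. A condensed sketch; paths, socket, and flags are exactly as logged, while the simple backgrounding of bdevperf stands in for the waitforlisten polling, and rpc_cmd (traced without an explicit socket) is shown as plain rpc.py against the default RPC socket, which is an assumption about its plumbing:

    RPC=/var/tmp/bperf.sock
    SPDK=/home/vagrant/spdk_repo/spdk

    # bdevperf pinned to core 1 (-m 2) with its own RPC socket: 131072-byte
    # random writes at queue depth 16 for 2 seconds; -z waits for an RPC
    # "perform_tests" before starting I/O.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$RPC" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!

    # Keep per-status-code NVMe error counters and retry failed I/O forever.
    "$SPDK/scripts/rpc.py" -s "$RPC" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Error injection off while the controller attaches ...
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable
    # ... attach over TCP with data digest enabled (--ddgst), then corrupt
    # crc32c results (-t corrupt, -i 32 as traced) so digests stop matching.
    "$SPDK/scripts/rpc.py" -s "$RPC" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$RPC" perform_tests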
00:17:53.913 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:53.913 Zero copy mechanism will not be used.
00:17:53.913 Running I/O for 2 seconds...
00:17:53.913 [2024-04-23 03:02:32.847968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.848406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.913 [2024-04-23 03:02:32.848437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:53.913 [2024-04-23 03:02:32.854233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.854560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.913 [2024-04-23 03:02:32.854590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:53.913 [2024-04-23 03:02:32.860401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.860763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.913 [2024-04-23 03:02:32.860788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:53.913 [2024-04-23 03:02:32.866498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.866852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.913 [2024-04-23 03:02:32.866881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:53.913 [2024-04-23 03:02:32.872661] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.873058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.913 [2024-04-23 03:02:32.873087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:53.913 [2024-04-23 03:02:32.878802] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.879161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:53.913 [2024-04-23 03:02:32.879199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:53.913 [2024-04-23 03:02:32.884806] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:53.913 [2024-04-23 03:02:32.885211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1
lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.885259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.890952] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.891328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.891364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.897114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.897512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.897571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.902957] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.903299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.903327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.909083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.909482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.909510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.915398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.915753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.915782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.920962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.921288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.921317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.927090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.927485] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.927513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.933395] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.933735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.933763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.939614] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.939958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.940030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.945914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.946245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.946282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.951856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.952235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.952290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.958215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.958606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.958633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.964420] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.964846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.964874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.970819] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.971126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.971164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.977059] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.977412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.977440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.983337] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.983679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.983708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.989324] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.989652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.989699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:32.995163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:32.995509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:32.995537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:33.000720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:33.001023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:33.001052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:33.006977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:33.007355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:33.007383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:33.013238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 
[2024-04-23 03:02:33.013635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:33.013662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:33.019685] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.914 [2024-04-23 03:02:33.020033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.914 [2024-04-23 03:02:33.020060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.914 [2024-04-23 03:02:33.025732] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.026064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.026093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.031363] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.031745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.031773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.037590] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.037936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.037963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.043370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.043728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.043771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.049516] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.049942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.049971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.055889] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) 
with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.056223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.056261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.061979] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.062367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.062394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:53.915 [2024-04-23 03:02:33.068127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:53.915 [2024-04-23 03:02:33.068510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:53.915 [2024-04-23 03:02:33.068538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.074131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.074488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.074515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.079906] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.080222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.080250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.086050] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.086469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.086497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.092422] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.092782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.092809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.098471] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.098782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.098809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.104371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.104744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.104772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.110054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.110405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.110433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.115781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.116105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.116157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.121740] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.122121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.122156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.127577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.127891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.127934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.133219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.133695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.133738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.139212] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.139584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.139612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.144970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.145289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.145334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.150814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.151158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.151194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.157004] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.157396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.157423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.163127] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.163526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.163553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.169529] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.169919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.169946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.176 [2024-04-23 03:02:33.175618] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.176 [2024-04-23 03:02:33.175928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.176 [2024-04-23 03:02:33.175956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
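Note that the WRITEs in this pass carry len:32 and SGL TRANSPORT DATA BLOCK descriptors, where the first pass logged len:1 with SGL DATA BLOCK OFFSET: each 131072-byte bdevperf I/O spans 32 namespace blocks, assuming the 4096-byte block size implied by the first pass (IO size 4096 mapping to len:1). A one-line sanity check of that arithmetic:

    # 128 KiB I/Os over 4 KiB blocks (block size inferred, not logged directly):
    echo $(( 131072 / 4096 ))   # -> 32, matching len:32 in every WRITE above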
00:17:54.176 [2024-04-23 03:02:33.181594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:54.176 [2024-04-23 03:02:33.181949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.176 [2024-04-23 03:02:33.182024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:54.176 [2024-04-23 03:02:33.187730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:54.176 [2024-04-23 03:02:33.188108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.176 [2024-04-23 03:02:33.188161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same three-entry pattern repeats roughly 140 more times, timestamps 03:02:33.193630 through 03:02:34.034250: a tcp.c:2047 data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90, the failed WRITE (sqid:1 cid:15 nsid:1, len:32, varying lba), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061 ...]
00:17:54.970 [2024-04-23 03:02:34.040058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90
00:17:54.970 [2024-04-23 03:02:34.040382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:54.970 [2024-04-23 03:02:34.040410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:54.970 [2024-04-23 03:02:34.046052] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with
pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.046433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.046461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.052474] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.052891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.052935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.058565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.058912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.058940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.064456] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.064799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.064826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.070504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.070936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.070964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.076735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.077131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.077185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.082966] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.083297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.083341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.089192] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.089516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.089543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.095219] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.095609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.095637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.101272] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.101629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.101657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.107173] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.107506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.107535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.113375] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.113739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.113767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.119635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.120005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.120032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:54.970 [2024-04-23 03:02:34.125643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:54.970 [2024-04-23 03:02:34.125942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:54.970 [2024-04-23 03:02:34.125970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.131399] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.131707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.131735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.137389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.137754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.137781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.143314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.143621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.143649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.149160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.149569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.149612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.155232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.155606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.155634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.161086] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.161448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.161475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.231 [2024-04-23 03:02:34.167234] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.167628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.167656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:55.231 [2024-04-23 03:02:34.173428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.231 [2024-04-23 03:02:34.173755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.231 [2024-04-23 03:02:34.173781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.179398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.179725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.179752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.185617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.185990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.186017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.191985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.192298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.192328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.198118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.198516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.198543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.204440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.204806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.204846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.210772] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.211164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.211201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.216983] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.217303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.217331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.222873] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.223217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.223255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.229022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.229336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.229364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.235276] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.235645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.235673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.241152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.241550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.241577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.247163] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.247496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.247523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.253306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.253656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.253684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.259674] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.260019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.260046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.266260] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.266659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.266717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.272682] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.273098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.273139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.278691] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.279075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.279102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.284577] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.284924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.284952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.290437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.290788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.290815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.296393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.296746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.296773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.302235] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.302555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.302582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.308020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.308343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.308371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.314145] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.314521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.314547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.320167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.320621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.320648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.326124] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.326486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.232 [2024-04-23 03:02:34.326511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.232 [2024-04-23 03:02:34.332210] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.232 [2024-04-23 03:02:34.332589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.332631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.338095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.338457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 
[2024-04-23 03:02:34.338483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.344010] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.344377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.344404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.349852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.350223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.350262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.355782] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.356071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.356098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.361484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.361824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.361850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.367452] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.367751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.367793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.374197] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.374535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.374562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.380094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.380471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.380498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.233 [2024-04-23 03:02:34.386238] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.233 [2024-04-23 03:02:34.386612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.233 [2024-04-23 03:02:34.386639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.392262] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.392680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.392737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.398668] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.399001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.399029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.404835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.405204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.405255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.411155] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.411569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.411597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.417133] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.417536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.417577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.423334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.423659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.423686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.429437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.429830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.429873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.435647] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.435979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.436005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.441561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.441980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.493 [2024-04-23 03:02:34.442022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.493 [2024-04-23 03:02:34.447748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.493 [2024-04-23 03:02:34.448149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.448199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.453794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.454162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.454197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.459855] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.460274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.460301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.466046] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.466414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.466441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.472037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.472408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.472451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.477960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.478297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.478340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.483752] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.484172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.484209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.489673] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.490042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.490069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.495730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.496111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.496162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.501748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.502074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.502112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.507656] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.508021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.508043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.513437] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.513742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.513777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.519334] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.519686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.519714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.525222] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.525550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.525578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.530841] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.531153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.531181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.536332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.536630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.536658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.541689] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.541989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.542022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.547444] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 
03:02:34.547747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.547775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.553288] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.553645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.553671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.559593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.559893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.559921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.565574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.565945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.565987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.571620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.571957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.571984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.577688] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.578070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.578098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.583697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.584089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.584142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.589748] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with 
pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.590090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.590123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.595254] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.595566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.595595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.601218] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.601561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.494 [2024-04-23 03:02:34.601587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.494 [2024-04-23 03:02:34.607280] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.494 [2024-04-23 03:02:34.607599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.495 [2024-04-23 03:02:34.607627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:55.495 [2024-04-23 03:02:34.613459] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.495 [2024-04-23 03:02:34.613760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.495 [2024-04-23 03:02:34.613789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:55.495 [2024-04-23 03:02:34.619289] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.495 [2024-04-23 03:02:34.619617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.495 [2024-04-23 03:02:34.619645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.495 [2024-04-23 03:02:34.625022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.495 [2024-04-23 03:02:34.625336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.495 [2024-04-23 03:02:34.625364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:55.495 [2024-04-23 03:02:34.630397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.495 [2024-04-23 03:02:34.630699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.495 [2024-04-23 03:02:34.630726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... some three dozen further data_crc32_calc_done digest-error / COMMAND TRANSIENT TRANSPORT ERROR completion pairs, identical apart from timestamp, LBA, and sqhd value, elided ...]
00:17:55.756 [2024-04-23 03:02:34.837256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x8b8ca0) with pdu=0x2000190fef90 00:17:55.756 [2024-04-23 03:02:34.837674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:55.756 [2024-04-23 03:02:34.837701] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:55.756 00:17:55.756 Latency(us) 00:17:55.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.756 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:55.756 nvme0n1 : 2.00 5154.73 644.34 0.00 0.00 3097.47 1906.50 6702.55 00:17:55.756 =================================================================================================================== 00:17:55.756 Total : 5154.73 644.34 0.00 0.00 3097.47 1906.50 6702.55 00:17:55.756 0 00:17:55.756 03:02:34 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:55.756 03:02:34 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:55.756 03:02:34 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:55.756 03:02:34 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:55.756 | .driver_specific 00:17:55.756 | .nvme_error 00:17:55.756 | .status_code 00:17:55.756 | .command_transient_transport_error' 00:17:56.015 03:02:35 -- host/digest.sh@71 -- # (( 332 > 0 )) 00:17:56.015 03:02:35 -- host/digest.sh@73 -- # killprocess 92467 00:17:56.015 03:02:35 -- common/autotest_common.sh@936 -- # '[' -z 92467 ']' 00:17:56.015 03:02:35 -- common/autotest_common.sh@940 -- # kill -0 92467 00:17:56.015 03:02:35 -- common/autotest_common.sh@941 -- # uname 00:17:56.015 03:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.015 03:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92467 00:17:56.015 03:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:56.015 03:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:56.015 killing process with pid 92467 00:17:56.015 03:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92467' 00:17:56.015 Received shutdown signal, test time was about 2.000000 seconds 00:17:56.015 00:17:56.015 Latency(us) 00:17:56.015 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.015 =================================================================================================================== 00:17:56.015 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.015 03:02:35 -- common/autotest_common.sh@955 -- # kill 92467 00:17:56.015 03:02:35 -- common/autotest_common.sh@960 -- # wait 92467 00:17:56.275 03:02:35 -- host/digest.sh@116 -- # killprocess 92291 00:17:56.275 03:02:35 -- common/autotest_common.sh@936 -- # '[' -z 92291 ']' 00:17:56.275 03:02:35 -- common/autotest_common.sh@940 -- # kill -0 92291 00:17:56.275 03:02:35 -- common/autotest_common.sh@941 -- # uname 00:17:56.275 03:02:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:56.275 03:02:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92291 00:17:56.275 03:02:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:56.275 03:02:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:56.275 killing process with pid 92291 00:17:56.275 03:02:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92291' 00:17:56.275 03:02:35 -- common/autotest_common.sh@955 -- # kill 92291 00:17:56.275 03:02:35 -- common/autotest_common.sh@960 -- # wait 92291 00:17:56.534 00:17:56.534 real 0m14.777s 00:17:56.534 user 0m28.629s 00:17:56.534 sys 0m4.423s 00:17:56.534 03:02:35 -- common/autotest_common.sh@1112 
-- # xtrace_disable 00:17:56.534 03:02:35 -- common/autotest_common.sh@10 -- # set +x 00:17:56.534 ************************************ 00:17:56.534 END TEST nvmf_digest_error 00:17:56.534 ************************************ 00:17:56.534 03:02:35 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:56.534 03:02:35 -- host/digest.sh@150 -- # nvmftestfini 00:17:56.534 03:02:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:56.534 03:02:35 -- nvmf/common.sh@117 -- # sync 00:17:56.534 03:02:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:56.534 03:02:35 -- nvmf/common.sh@120 -- # set +e 00:17:56.534 03:02:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:56.534 03:02:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:56.534 rmmod nvme_tcp 00:17:56.534 rmmod nvme_fabrics 00:17:56.534 rmmod nvme_keyring 00:17:56.534 03:02:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:56.534 03:02:35 -- nvmf/common.sh@124 -- # set -e 00:17:56.534 03:02:35 -- nvmf/common.sh@125 -- # return 0 00:17:56.534 03:02:35 -- nvmf/common.sh@478 -- # '[' -n 92291 ']' 00:17:56.534 03:02:35 -- nvmf/common.sh@479 -- # killprocess 92291 00:17:56.534 03:02:35 -- common/autotest_common.sh@936 -- # '[' -z 92291 ']' 00:17:56.534 03:02:35 -- common/autotest_common.sh@940 -- # kill -0 92291 00:17:56.534 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (92291) - No such process 00:17:56.534 03:02:35 -- common/autotest_common.sh@963 -- # echo 'Process with pid 92291 is not found' 00:17:56.534 Process with pid 92291 is not found 00:17:56.534 03:02:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:56.535 03:02:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:56.535 03:02:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:56.535 03:02:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.535 03:02:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:56.535 03:02:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.535 03:02:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.535 03:02:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.794 03:02:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:17:56.794 00:17:56.794 real 0m30.323s 00:17:56.794 user 0m57.278s 00:17:56.794 sys 0m9.084s 00:17:56.794 03:02:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:56.794 03:02:35 -- common/autotest_common.sh@10 -- # set +x 00:17:56.794 ************************************ 00:17:56.794 END TEST nvmf_digest 00:17:56.794 ************************************ 00:17:56.794 03:02:35 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]] 00:17:56.794 03:02:35 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:17:56.794 03:02:35 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:56.794 03:02:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:56.794 03:02:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:56.794 03:02:35 -- common/autotest_common.sh@10 -- # set +x 00:17:56.794 ************************************ 00:17:56.794 START TEST nvmf_multipath 00:17:56.794 ************************************ 00:17:56.794 03:02:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:56.794 * Looking for test storage... 
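The pass/fail gate for the digest-error run that just ended is the transient-transport-error counter the bperf bdevperf instance accumulates against nvme0n1: with data-digest corruption injected, the test only passes if that counter is non-zero, which is what the (( 332 > 0 )) check traced above asserts. A minimal sketch of that query, reconstructed from the bperf_rpc/jq calls in this trace (the real helper lives in host/digest.sh and may differ in detail):

    # Sketch: pull the transient-transport-error count for a bdev over the
    # bperf RPC socket used in this run (socket path as traced above).
    get_transient_errcount() {
        local bdev=$1
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # here: (( 332 > 0 ))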
00:17:56.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:56.794 03:02:35 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:56.794 03:02:35 -- nvmf/common.sh@7 -- # uname -s 00:17:56.794 03:02:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.794 03:02:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.794 03:02:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.794 03:02:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.794 03:02:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.794 03:02:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.794 03:02:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.794 03:02:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.794 03:02:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.794 03:02:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.794 03:02:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:17:56.794 03:02:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:17:56.794 03:02:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.794 03:02:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.794 03:02:35 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:56.794 03:02:35 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:56.794 03:02:35 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.794 03:02:35 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.794 03:02:35 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.794 03:02:35 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.794 03:02:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain prefixes repeated several more times, then the stock system paths ...]:/var/lib/snapd/snap/bin 00:17:56.794 03:02:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:17:56.794 03:02:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:17:56.794 03:02:35 -- paths/export.sh@5 -- # export PATH 00:17:56.795 03:02:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... repeated toolchain prefixes elided ...]:/var/lib/snapd/snap/bin 00:17:56.795 03:02:35 -- nvmf/common.sh@47 -- # : 0 00:17:56.795 03:02:35 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:56.795 03:02:35 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:56.795 03:02:35 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:56.795 03:02:35 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.795 03:02:35 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.795 03:02:35 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:56.795 03:02:35 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:56.795 03:02:35 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:56.795 03:02:35 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:56.795 03:02:35 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:56.795 03:02:35 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:56.795 03:02:35 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:56.795 03:02:35 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.795 03:02:35 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:56.795 03:02:35 -- host/multipath.sh@30 -- # nvmftestinit 00:17:56.795 03:02:35 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:56.795 03:02:35 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.795 03:02:35 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:56.795 03:02:35 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:56.795 03:02:35 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:56.795 03:02:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.795 03:02:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.795 03:02:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.054 03:02:35 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:17:57.054 03:02:35 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:17:57.054 03:02:35 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:17:57.054 03:02:35 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:17:57.054 03:02:35 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:17:57.054 03:02:35 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:17:57.054 03:02:35 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.054 03:02:35 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:57.054 03:02:35 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:57.054 03:02:35 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:17:57.054 03:02:35 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.054 03:02:35 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.054 03:02:35 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.054 03:02:35 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.054 03:02:35 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.054 03:02:35 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.054 03:02:35 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.054 03:02:35 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.054 03:02:35 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:17:57.054 03:02:35 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:17:57.054 Cannot find device "nvmf_tgt_br" 00:17:57.054 03:02:35 -- nvmf/common.sh@155 -- # true 00:17:57.054 03:02:35 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.054 Cannot find device "nvmf_tgt_br2" 00:17:57.054 03:02:35 -- nvmf/common.sh@156 -- # true 00:17:57.054 03:02:35 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:17:57.054 03:02:35 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:17:57.054 Cannot find device "nvmf_tgt_br" 00:17:57.054 03:02:36 -- nvmf/common.sh@158 -- # true 00:17:57.054 03:02:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:17:57.054 Cannot find device "nvmf_tgt_br2" 00:17:57.054 03:02:36 -- nvmf/common.sh@159 -- # true 00:17:57.054 03:02:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:17:57.054 03:02:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:17:57.054 03:02:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.054 03:02:36 -- nvmf/common.sh@162 -- # true 00:17:57.054 03:02:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.054 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.054 03:02:36 -- nvmf/common.sh@163 -- # true 00:17:57.054 03:02:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.054 03:02:36 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.054 03:02:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.054 03:02:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.054 03:02:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.054 03:02:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.054 03:02:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.054 03:02:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.054 03:02:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.054 03:02:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:17:57.054 03:02:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:17:57.054 03:02:36 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:17:57.054 03:02:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:17:57.054 03:02:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.054 03:02:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.054 03:02:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.054 03:02:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:17:57.054 03:02:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:17:57.054 03:02:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.313 03:02:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.313 03:02:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.313 03:02:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.313 03:02:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.313 03:02:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:17:57.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:17:57.313 00:17:57.313 --- 10.0.0.2 ping statistics --- 00:17:57.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.313 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:17:57.313 03:02:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:17:57.313 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.313 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:17:57.313 00:17:57.313 --- 10.0.0.3 ping statistics --- 00:17:57.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.313 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:17:57.313 03:02:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:57.313 00:17:57.313 --- 10.0.0.1 ping statistics --- 00:17:57.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.313 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:57.313 03:02:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.313 03:02:36 -- nvmf/common.sh@422 -- # return 0 00:17:57.313 03:02:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:57.313 03:02:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.313 03:02:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:57.313 03:02:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:57.313 03:02:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.313 03:02:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:57.313 03:02:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:57.313 03:02:36 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:57.313 03:02:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:57.313 03:02:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:57.313 03:02:36 -- common/autotest_common.sh@10 -- # set +x 00:17:57.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
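The ping statistics above confirm the virtual topology nvmf_veth_init just built: the target side lives in the nvmf_tgt_ns_spdk namespace (10.0.0.2, plus a second interface at 10.0.0.3), the initiator keeps nvmf_init_if (10.0.0.1), and the two sides are joined by the nvmf_br bridge. Condensed to its essential commands, all taken from the trace above (the per-link "up" steps and the second target interface are omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if  # target
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1  # target -> initiator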
00:17:57.313 03:02:36 -- nvmf/common.sh@470 -- # nvmfpid=92721 00:17:57.313 03:02:36 -- nvmf/common.sh@471 -- # waitforlisten 92721 00:17:57.313 03:02:36 -- common/autotest_common.sh@817 -- # '[' -z 92721 ']' 00:17:57.313 03:02:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:57.313 03:02:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.313 03:02:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:57.313 03:02:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.313 03:02:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:57.313 03:02:36 -- common/autotest_common.sh@10 -- # set +x 00:17:57.313 [2024-04-23 03:02:36.354921] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:17:57.313 [2024-04-23 03:02:36.355010] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.572 [2024-04-23 03:02:36.482237] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:17:57.572 [2024-04-23 03:02:36.496751] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:57.572 [2024-04-23 03:02:36.537363] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.572 [2024-04-23 03:02:36.537573] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.572 [2024-04-23 03:02:36.537708] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.572 [2024-04-23 03:02:36.537831] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.572 [2024-04-23 03:02:36.537873] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:57.572 [2024-04-23 03:02:36.538143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.572 [2024-04-23 03:02:36.538158] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.572 03:02:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:57.572 03:02:36 -- common/autotest_common.sh@850 -- # return 0 00:17:57.572 03:02:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:57.572 03:02:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:57.572 03:02:36 -- common/autotest_common.sh@10 -- # set +x 00:17:57.572 03:02:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:57.572 03:02:36 -- host/multipath.sh@33 -- # nvmfapp_pid=92721 00:17:57.572 03:02:36 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:57.830 [2024-04-23 03:02:36.922137] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.830 03:02:36 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:58.088 Malloc0 00:17:58.088 03:02:37 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:58.656 03:02:37 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.657 03:02:37 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.915 [2024-04-23 03:02:38.030377] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.915 03:02:38 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:59.174 [2024-04-23 03:02:38.270574] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:59.174 03:02:38 -- host/multipath.sh@44 -- # bdevperf_pid=92769 00:17:59.174 03:02:38 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:59.174 03:02:38 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:59.174 03:02:38 -- host/multipath.sh@47 -- # waitforlisten 92769 /var/tmp/bdevperf.sock 00:17:59.174 03:02:38 -- common/autotest_common.sh@817 -- # '[' -z 92769 ']' 00:17:59.174 03:02:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:59.174 03:02:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:59.174 03:02:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:59.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
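With both reactors up, everything the multipath test needs on the target side is driven over the default RPC socket: one TCP transport, one Malloc namespace, and the same subsystem exported on two portals (4420 and 4421) so ANA state can later be flipped per listener. A sketch of that bring-up, condensed from the nvmf_tgt launch and rpc.py calls traced above:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421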
00:17:59.174 03:02:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:59.174 03:02:38 -- common/autotest_common.sh@10 -- # set +x 00:17:59.434 03:02:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.434 03:02:38 -- common/autotest_common.sh@850 -- # return 0 00:17:59.434 03:02:38 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:59.692 03:02:38 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:18:00.260 Nvme0n1 00:18:00.260 03:02:39 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:18:00.520 Nvme0n1 00:18:00.520 03:02:39 -- host/multipath.sh@78 -- # sleep 1 00:18:00.520 03:02:39 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.457 03:02:40 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:18:01.457 03:02:40 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:01.716 03:02:40 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:01.975 03:02:41 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:18:01.975 03:02:41 -- host/multipath.sh@65 -- # dtrace_pid=92807 00:18:01.975 03:02:41 -- host/multipath.sh@66 -- # sleep 6 00:18:01.975 03:02:41 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:08.556 03:02:47 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:08.556 03:02:47 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:08.556 03:02:47 -- host/multipath.sh@67 -- # active_port=4421 00:18:08.556 03:02:47 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.556 Attaching 4 probes... 
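The initiator side is the interesting half: bdevperf attaches the same subsystem twice, once per portal, and the -x multipath flag on the second attach makes 4421 an additional path on the existing Nvme0n1 bdev rather than a second controller; set_ANA_state is then just a pair of listener RPCs against the target. Reconstructed from the calls traced above (first window: 4420 non_optimized, 4421 optimized); the @path counters that follow are the six-second sample for this window:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bperf="$rpc -s /var/tmp/bdevperf.sock"
    $bperf bdev_nvme_set_options -r -1
    $bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    $bperf bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
    # set_ANA_state non_optimized optimized:
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n optimized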
00:18:08.556 @path[10.0.0.2, 4421]: 15525 00:18:08.556 @path[10.0.0.2, 4421]: 15784 00:18:08.556 @path[10.0.0.2, 4421]: 15846 00:18:08.556 @path[10.0.0.2, 4421]: 15875 00:18:08.556 @path[10.0.0.2, 4421]: 15852 00:18:08.556 03:02:47 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:08.557 03:02:47 -- host/multipath.sh@69 -- # sed -n 1p 00:18:08.557 03:02:47 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:08.557 03:02:47 -- host/multipath.sh@69 -- # port=4421 00:18:08.557 03:02:47 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:08.557 03:02:47 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:08.557 03:02:47 -- host/multipath.sh@72 -- # kill 92807 00:18:08.557 03:02:47 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:08.557 03:02:47 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:18:08.557 03:02:47 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:08.557 03:02:47 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:08.815 03:02:47 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:18:08.815 03:02:47 -- host/multipath.sh@65 -- # dtrace_pid=92925 00:18:08.815 03:02:47 -- host/multipath.sh@66 -- # sleep 6 00:18:08.815 03:02:47 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:15.374 03:02:53 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:15.374 03:02:53 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:15.374 03:02:54 -- host/multipath.sh@67 -- # active_port=4420 00:18:15.374 03:02:54 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.374 Attaching 4 probes... 
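Every verification window is reduced the same way: the "@path[ip, port]: count" samples in trace.txt name the portal each I/O burst actually used, and confirm_io_on_port compares the first sampled port against whichever listener currently reports the expected ANA state. A sketch of that parsing, pieced together from the jq/awk/cut/sed steps traced above (exact pipeline order in host/multipath.sh may differ; the window starting here expects non_optimized on 4420):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid')
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)
    [[ $port == "$active_port" ]]   # xtrace renders this as [[ 4420 == \4\4\2\0 ]]

The 4420 samples that follow are exactly such a window.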
00:18:15.374 @path[10.0.0.2, 4420]: 16023 00:18:15.374 @path[10.0.0.2, 4420]: 16076 00:18:15.374 @path[10.0.0.2, 4420]: 16018 00:18:15.374 @path[10.0.0.2, 4420]: 17308 00:18:15.374 @path[10.0.0.2, 4420]: 18600 00:18:15.374 03:02:54 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:15.374 03:02:54 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:15.374 03:02:54 -- host/multipath.sh@69 -- # sed -n 1p 00:18:15.374 03:02:54 -- host/multipath.sh@69 -- # port=4420 00:18:15.374 03:02:54 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:15.374 03:02:54 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:15.374 03:02:54 -- host/multipath.sh@72 -- # kill 92925 00:18:15.374 03:02:54 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:15.374 03:02:54 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:15.374 03:02:54 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:15.374 03:02:54 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:15.632 03:02:54 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:15.632 03:02:54 -- host/multipath.sh@65 -- # dtrace_pid=93037 00:18:15.632 03:02:54 -- host/multipath.sh@66 -- # sleep 6 00:18:15.632 03:02:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:22.196 03:03:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:22.196 03:03:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:22.196 03:03:00 -- host/multipath.sh@67 -- # active_port=4421 00:18:22.196 03:03:00 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.196 Attaching 4 probes... 
00:18:22.196 @path[10.0.0.2, 4421]: 12376 00:18:22.196 @path[10.0.0.2, 4421]: 15627 00:18:22.196 @path[10.0.0.2, 4421]: 15631 00:18:22.196 @path[10.0.0.2, 4421]: 15721 00:18:22.196 @path[10.0.0.2, 4421]: 15670 00:18:22.196 03:03:00 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:22.196 03:03:00 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:22.196 03:03:00 -- host/multipath.sh@69 -- # sed -n 1p 00:18:22.196 03:03:01 -- host/multipath.sh@69 -- # port=4421 00:18:22.196 03:03:01 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:22.196 03:03:01 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:22.196 03:03:01 -- host/multipath.sh@72 -- # kill 93037 00:18:22.196 03:03:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:22.196 03:03:01 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:22.196 03:03:01 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:18:22.197 03:03:01 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:18:22.455 03:03:01 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:22.455 03:03:01 -- host/multipath.sh@65 -- # dtrace_pid=93154 00:18:22.455 03:03:01 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:22.455 03:03:01 -- host/multipath.sh@66 -- # sleep 6 00:18:29.017 03:03:07 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:29.017 03:03:07 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:29.017 03:03:07 -- host/multipath.sh@67 -- # active_port= 00:18:29.017 03:03:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.017 Attaching 4 probes... 
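The window opening here is the degenerate case: both listeners were just set inaccessible, so nvmf_path.bt should record no @path samples at all (the probe output below is timestamps only), and the jq selector matches no listener, leaving both sides of the comparison empty:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners \
        nqn.2016-06.io.spdk:cnode1 \
        | jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid'  # -> nothing
    # confirm_io_on_port '' '' then passes via [[ '' == '' ]], as traced below.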
00:18:29.017 00:18:29.017 00:18:29.017 00:18:29.017 00:18:29.017 00:18:29.017 03:03:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:29.017 03:03:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:29.017 03:03:07 -- host/multipath.sh@69 -- # sed -n 1p 00:18:29.017 03:03:07 -- host/multipath.sh@69 -- # port= 00:18:29.017 03:03:07 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:29.017 03:03:07 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:29.017 03:03:07 -- host/multipath.sh@72 -- # kill 93154 00:18:29.017 03:03:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:29.017 03:03:07 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:29.017 03:03:07 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:18:29.017 03:03:08 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:29.275 03:03:08 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:29.275 03:03:08 -- host/multipath.sh@65 -- # dtrace_pid=93268 00:18:29.275 03:03:08 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:29.275 03:03:08 -- host/multipath.sh@66 -- # sleep 6 00:18:35.853 03:03:14 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:35.853 03:03:14 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:35.853 03:03:14 -- host/multipath.sh@67 -- # active_port=4421 00:18:35.853 03:03:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.853 Attaching 4 probes... 
00:18:35.853 @path[10.0.0.2, 4421]: 15221 00:18:35.853 @path[10.0.0.2, 4421]: 15490 00:18:35.853 @path[10.0.0.2, 4421]: 15363 00:18:35.853 @path[10.0.0.2, 4421]: 15452 00:18:35.853 @path[10.0.0.2, 4421]: 15583 00:18:35.853 03:03:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:35.853 03:03:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:35.854 03:03:14 -- host/multipath.sh@69 -- # sed -n 1p 00:18:35.854 03:03:14 -- host/multipath.sh@69 -- # port=4421 00:18:35.854 03:03:14 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:35.854 03:03:14 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:35.854 03:03:14 -- host/multipath.sh@72 -- # kill 93268 00:18:35.854 03:03:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:35.854 03:03:14 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:35.854 [2024-04-23 03:03:14.853909] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.853993] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854022] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854046] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854054] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854062] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854071] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854079] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854096] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854104] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854112] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854120] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854128] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854136] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854144] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854152] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854
[... roughly two dozen further identical tcp.c:1587 recv-state messages, differing only in the microsecond timestamp, elided ...]
[2024-04-23 03:03:14.854423] 
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854440] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854448] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854457] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854466] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854475] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854483] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854492] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 [2024-04-23 03:03:14.854500] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13f7400 is same with the state(5) to be set 00:18:35.854 03:03:14 -- host/multipath.sh@101 -- # sleep 1 00:18:36.821 03:03:15 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:36.821 03:03:15 -- host/multipath.sh@65 -- # dtrace_pid=93391 00:18:36.821 03:03:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:36.821 03:03:15 -- host/multipath.sh@66 -- # sleep 6 00:18:43.383 03:03:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:43.383 03:03:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:43.383 03:03:22 -- host/multipath.sh@67 -- # active_port=4420 00:18:43.383 03:03:22 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.383 Attaching 4 probes... 
00:18:43.383 @path[10.0.0.2, 4420]: 15283 00:18:43.383 @path[10.0.0.2, 4420]: 15583 00:18:43.383 @path[10.0.0.2, 4420]: 15566 00:18:43.383 @path[10.0.0.2, 4420]: 15580 00:18:43.383 @path[10.0.0.2, 4420]: 15671 00:18:43.383 03:03:22 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:43.383 03:03:22 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:43.383 03:03:22 -- host/multipath.sh@69 -- # sed -n 1p 00:18:43.383 03:03:22 -- host/multipath.sh@69 -- # port=4420 00:18:43.383 03:03:22 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:43.383 03:03:22 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:43.383 03:03:22 -- host/multipath.sh@72 -- # kill 93391 00:18:43.383 03:03:22 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:43.383 03:03:22 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:43.383 [2024-04-23 03:03:22.414184] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:43.383 03:03:22 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:43.641 03:03:22 -- host/multipath.sh@111 -- # sleep 6 00:18:50.199 03:03:28 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:50.199 03:03:28 -- host/multipath.sh@65 -- # dtrace_pid=93566 00:18:50.199 03:03:28 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 92721 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:50.199 03:03:28 -- host/multipath.sh@66 -- # sleep 6 00:18:56.773 03:03:34 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:56.773 03:03:34 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:56.773 03:03:35 -- host/multipath.sh@67 -- # active_port=4421 00:18:56.773 03:03:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:56.773 Attaching 4 probes... 
00:18:56.773 @path[10.0.0.2, 4421]: 15218 00:18:56.773 @path[10.0.0.2, 4421]: 15639 00:18:56.773 @path[10.0.0.2, 4421]: 15360 00:18:56.773 @path[10.0.0.2, 4421]: 15520 00:18:56.773 @path[10.0.0.2, 4421]: 15616 00:18:56.773 03:03:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:56.773 03:03:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:56.773 03:03:35 -- host/multipath.sh@69 -- # sed -n 1p 00:18:56.773 03:03:35 -- host/multipath.sh@69 -- # port=4421 00:18:56.773 03:03:35 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:56.773 03:03:35 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:56.773 03:03:35 -- host/multipath.sh@72 -- # kill 93566 00:18:56.773 03:03:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:56.773 03:03:35 -- host/multipath.sh@114 -- # killprocess 92769 00:18:56.773 03:03:35 -- common/autotest_common.sh@936 -- # '[' -z 92769 ']' 00:18:56.773 03:03:35 -- common/autotest_common.sh@940 -- # kill -0 92769 00:18:56.773 03:03:35 -- common/autotest_common.sh@941 -- # uname 00:18:56.773 03:03:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:56.773 03:03:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92769 00:18:56.773 killing process with pid 92769 00:18:56.773 03:03:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:56.773 03:03:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:56.773 03:03:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92769' 00:18:56.773 03:03:35 -- common/autotest_common.sh@955 -- # kill 92769 00:18:56.773 03:03:35 -- common/autotest_common.sh@960 -- # wait 92769 00:18:56.773 Connection closed with partial response: 00:18:56.773 00:18:56.773 00:18:56.773 03:03:35 -- host/multipath.sh@116 -- # wait 92769 00:18:56.773 03:03:35 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:56.773 [2024-04-23 03:02:38.331777] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:18:56.773 [2024-04-23 03:02:38.331895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92769 ] 00:18:56.773 [2024-04-23 03:02:38.449985] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:56.773 [2024-04-23 03:02:38.465282] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.773 [2024-04-23 03:02:38.502318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.773 Running I/O for 90 seconds... 
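[Editor's note: the multipath.sh trace above repeats one helper for every ANA transition: start the bpftrace probes against bdevperf, let I/O run, then check that the target's listener state and the host's per-path counters agree on the active port. Below is a minimal bash sketch of that check, reconstructed only from the trace lines at multipath.sh@64-73; the function body and names such as $rootdir and $bdevperf_pid are assumptions, not the test's actual source.]

confirm_io_on_port() {
    local expected_state=$1 expected_port=$2

    # Attach nvmf_path.bt to the bdevperf process (92721 in this run); it
    # prints "@path[10.0.0.2, <port>]: <count>" lines into trace.txt.
    "$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" "$rootdir/scripts/bpf/nvmf_path.bt" \
        &> trace.txt &
    local dtrace_pid=$!
    sleep 6

    # Ask the target which listener port currently reports the expected ANA state.
    local active_port
    active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners \
        nqn.2016-06.io.spdk:cnode1 |
        jq -r ".[] | select(.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

    # The first @path line names the port the host actually sent I/O to:
    # awk keeps the "<port>]:" field, cut strips "]:", sed takes the first match.
    local port
    port=$(awk '$1=="@path[10.0.0.2," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)

    kill "$dtrace_pid"
    rm -f trace.txt

    # Both views must name the expected port for the check to pass.
    [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
}

confirm_io_on_port non_optimized 4420   # as invoked at multipath.sh@104 above

[In this section the check passes three times: 4421 optimized, then 4420 non_optimized after the 4421 listener is removed, then 4421 optimized again once the listener is re-added and its ANA state restored.]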
00:18:56.773 [2024-04-23 03:02:47.842421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:110336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.773 [2024-04-23 03:02:47.842790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.842826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.842863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.842924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.842961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.842982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.842997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:109744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:109752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843277] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:56.773 [2024-04-23 03:02:47.843505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.773 [2024-04-23 03:02:47.843520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.843557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.843594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.843630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109856 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.843667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.843704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.843741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:110392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.843784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:110400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.843821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:110408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.843867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.843905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:110424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.843942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.843964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:110432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.843979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:109928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:109936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0078 
p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.774 [2024-04-23 03:02:47.844667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:110456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:110480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:110488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:110496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.774 [2024-04-23 03:02:47.844975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:56.774 [2024-04-23 03:02:47.844997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 
03:02:47.845176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 
lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:110176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:110184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.775 [2024-04-23 03:02:47.845896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.845937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.845959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.845974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:110576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:110584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0029 p:0 m:0 
dnr:0 00:18:56.775 [2024-04-23 03:02:47.846756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:110624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:110632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.775 [2024-04-23 03:02:47.846926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:56.775 [2024-04-23 03:02:47.846948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.846963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.846984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.846999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:110216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:110232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:110240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:110256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:110280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847578] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.847600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.847615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.776 [2024-04-23 03:02:47.849138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:110648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:110656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:110664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:110672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:110688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:110696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:56.776 [2024-04-23 03:02:47.849464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110704 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:56.776 [2024-04-23 03:02:47.849484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:18:56.776 [2024-04-23 03:02:54.416812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:56.776 [2024-04-23 03:02:54.416886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:18:56.776 [2024-04-23 03:02:54.417127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:56.776 [2024-04-23 03:02:54.417142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... several hundred near-identical command/completion pairs omitted (00:18:56.776 through 00:18:56.781, SPDK timestamps 03:02:54.416812 through 03:02:54.428371): every outstanding I/O on qid:1 nsid:1, namely WRITEs to lba 26176-26552 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READs from lba 25536-26168 (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), cdw0:0, sqhd cycling 0x0000-0x007f, p:0 m:0 dnr:0 ...]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:25824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.781 [2024-04-23 03:02:54.428385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:56.781 [2024-04-23 03:02:54.428407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:25832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.781 [2024-04-23 03:02:54.428422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:56.781 [2024-04-23 03:02:54.428444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.781 [2024-04-23 03:02:54.428458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:25864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:18:56.782 [2024-04-23 03:02:54.428743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.428757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.428779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:25912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:26432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:26440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:26456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:26480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:26488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.443639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.443950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.443981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:25976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:26000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:26008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:26016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:26024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:26040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.782 [2024-04-23 03:02:54.444530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.444581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:26504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.444631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:56.782 [2024-04-23 03:02:54.444662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:26512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.782 [2024-04-23 03:02:54.444682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.444712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:26520 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:56.783 [2024-04-23 03:02:54.444733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.444763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:26528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.444804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.444843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.444863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.444894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:26544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.444914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.444945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.444966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.444996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:26048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:26056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:26064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:26072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:26080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:64 nsid:1 lba:26088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:26096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:26104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:26120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:26128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:26136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:26152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445893] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:26168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.445913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.445950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:26176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.445970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:26184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.446028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:26192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.446079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:26200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.783 [2024-04-23 03:02:54.446145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:25536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:25560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:18:56.783 [2024-04-23 03:02:54.446451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:25600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:25608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:56.783 [2024-04-23 03:02:54.446896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:25640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.783 [2024-04-23 03:02:54.446927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.446958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.446978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.447028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:26216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:26224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:26232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:26248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:26264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:26272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:26288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:26312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.447793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.447844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:25672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.447904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.447935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.447976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.448007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.448036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.448078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:25696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:56.784 [2024-04-23 03:02:54.448107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.448185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.448207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.448237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:25712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.448258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.448299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.784 [2024-04-23 03:02:54.448474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:02:54.449177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:26320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:02:54.449216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:15296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:15320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 
lba:15336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.784 [2024-04-23 03:03:01.520824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:56.784 [2024-04-23 03:03:01.520845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-04-23 03:03:01.520859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.520880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:15416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-04-23 03:03:01.520895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.520917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-04-23 03:03:01.520932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.520954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:15432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.785 [2024-04-23 03:03:01.520969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.520991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.521027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.521063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.521099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:15064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.521135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.521195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:56.785 [2024-04-23 03:03:01.521234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.785 [2024-04-23 03:03:01.521249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 
00:18:56.785 [2024-04-23 03:03:01.521271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:56.785 [2024-04-23 03:03:01.521286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
[... ~97 further command/completion NOTICE pairs elided from this excerpt: READ (lba 15104-15288, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (lba 15440-16016, SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 nsid:1 with varying cid, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd advancing 0072-0052; elapsed 00:18:56.785-786, app timestamps 03:03:01.521307 through 03:03:01.526520 ...]
00:18:56.787 [2024-04-23 03:03:14.854566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:56.787 [2024-04-23 03:03:14.854615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~110 further command/completion NOTICE pairs elided from this excerpt: READ (lba 12552-13112, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (lba 13184-13496, SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 nsid:1 with varying cid, each completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 sqhd:0000; elapsed 00:18:56.787-790, app timestamps 03:03:14.854641 through 03:03:14.858387 ...]
00:18:56.790 [2024-04-23 03:03:14.858408] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.790 [2024-04-23 03:03:14.858424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.790 [2024-04-23 03:03:14.858438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.790 [2024-04-23 03:03:14.858453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.790 [2024-04-23 03:03:14.858467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.790 [2024-04-23 03:03:14.858482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.790 [2024-04-23 03:03:14.858496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.790 [2024-04-23 03:03:14.858511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.790 [2024-04-23 03:03:14.858525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.790 [2024-04-23 03:03:14.858541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.790 [2024-04-23 03:03:14.858555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.791 [2024-04-23 03:03:14.858583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.791 [2024-04-23 03:03:14.858612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:56.791 [2024-04-23 03:03:14.858641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:56.791 [2024-04-23 03:03:14.858853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18c5a60 is same with the state(5) to be set 00:18:56.791 [2024-04-23 03:03:14.858885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:56.791 [2024-04-23 03:03:14.858896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:56.791 [2024-04-23 03:03:14.858907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13176 len:8 PRP1 0x0 PRP2 0x0 00:18:56.791 [2024-04-23 03:03:14.858920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.858966] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18c5a60 was disconnected and freed. reset controller. 
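A note on the burst above: ABORTED - SQ DELETION (00/08) is the status assigned to every command still queued on a submission queue when that queue is torn down, so a flood of these notices is the expected signature of the multipath test dropping the active path, not a data-integrity failure. A minimal sketch of that kind of trigger, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on both 10.0.0.2:4420 and 10.0.0.2:4421 as in this run (the rpc shell variable is just our shorthand):

  # Drop the active listener; in-flight I/O on its SQ completes with
  # ABORTED - SQ DELETION (00/08) and bdev_nvme resets to the surviving path.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  # Later, restore the first path so the test can fail back:
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420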
00:18:56.791 [2024-04-23 03:03:14.859093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.791 [2024-04-23 03:03:14.859119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.859134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.791 [2024-04-23 03:03:14.859163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.859194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.791 [2024-04-23 03:03:14.859207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.859221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.791 [2024-04-23 03:03:14.859235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.791 [2024-04-23 03:03:14.859249] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb4f0 is same with the state(5) to be set 00:18:56.791 [2024-04-23 03:03:14.860335] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.791 [2024-04-23 03:03:14.860372] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb4f0 (9): Bad file descriptor 00:18:56.791 [2024-04-23 03:03:14.860691] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.791 [2024-04-23 03:03:14.860765] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.791 [2024-04-23 03:03:14.860815] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:56.791 [2024-04-23 03:03:14.860837] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cb4f0 with addr=10.0.0.2, port=4421 00:18:56.791 [2024-04-23 03:03:14.860852] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18cb4f0 is same with the state(5) to be set 00:18:56.791 [2024-04-23 03:03:14.860897] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18cb4f0 (9): Bad file descriptor 00:18:56.791 [2024-04-23 03:03:14.860928] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:56.791 [2024-04-23 03:03:14.860946] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:56.791 [2024-04-23 03:03:14.860961] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:56.791 [2024-04-23 03:03:14.860993] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
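The connect() failures above carry errno = 111 (ECONNREFUSED): the reconnect fires while no listener is up on 10.0.0.2:4421 yet, so the controller is marked failed and bdev_nvme keeps retrying on its reconnect timer until the reset finally succeeds about ten seconds later (below). A hedged way to watch that retry loop from outside, assuming the same /var/tmp/bdevperf.sock RPC socket the timeout test below uses and the stock bdev_nvme_get_controllers RPC:

  # Poll the bdev_nvme controller list; the entry for the failed controller
  # remains visible while reconnect attempts are in flight.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_get_controllers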
00:18:56.791 [2024-04-23 03:03:14.861009] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:56.791 [2024-04-23 03:03:24.921136] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:56.791 Received shutdown signal, test time was about 55.415344 seconds
00:18:56.791
00:18:56.791 Latency(us)
00:18:56.791 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:56.791 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:56.791 Verification LBA range: start 0x0 length 0x4000
00:18:56.791 Nvme0n1 : 55.41 6714.76 26.23 0.00 0.00 19026.78 389.12 7046430.72
00:18:56.791 ===================================================================================================================
00:18:56.791 Total : 6714.76 26.23 0.00 0.00 19026.78 389.12 7046430.72
00:18:56.791 03:03:35 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:56.791 03:03:35 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:56.791 03:03:35 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:56.791 03:03:35 -- host/multipath.sh@125 -- # nvmftestfini 00:18:56.791 03:03:35 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:56.791 03:03:35 -- nvmf/common.sh@117 -- # sync 00:18:56.791 03:03:35 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:56.791 03:03:35 -- nvmf/common.sh@120 -- # set +e 00:18:56.791 03:03:35 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:56.791 03:03:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:56.791 rmmod nvme_tcp 00:18:56.791 rmmod nvme_fabrics 00:18:56.791 rmmod nvme_keyring 00:18:56.791 03:03:35 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:56.791 03:03:35 -- nvmf/common.sh@124 -- # set -e 00:18:56.791 03:03:35 -- nvmf/common.sh@125 -- # return 0 00:18:56.791 03:03:35 -- nvmf/common.sh@478 -- # '[' -n 92721 ']' 00:18:56.791 03:03:35 -- nvmf/common.sh@479 -- # killprocess 92721 00:18:56.791 03:03:35 -- common/autotest_common.sh@936 -- # '[' -z 92721 ']' 00:18:56.791 03:03:35 -- common/autotest_common.sh@940 -- # kill -0 92721 00:18:56.791 03:03:35 -- common/autotest_common.sh@941 -- # uname 00:18:56.791 03:03:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:56.791 03:03:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92721 00:18:56.791 03:03:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:56.791 03:03:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:56.791 killing process with pid 92721 00:18:56.791 03:03:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92721' 00:18:56.791 03:03:35 -- common/autotest_common.sh@955 -- # kill 92721 00:18:56.791 03:03:35 -- common/autotest_common.sh@960 -- # wait 92721 00:18:56.791 03:03:35 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:56.791 03:03:35 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:56.791 03:03:35 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:56.791 03:03:35 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:56.791 03:03:35 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:56.791 03:03:35 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:56.791 03:03:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:56.791 03:03:35 -- common/autotest_common.sh@22 -- #
_remove_spdk_ns 00:18:56.791 03:03:35 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:18:56.791 00:18:56.791 real 1m0.033s 00:18:56.791 user 2m46.935s 00:18:56.791 sys 0m17.921s 00:18:56.791 03:03:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.791 03:03:35 -- common/autotest_common.sh@10 -- # set +x 00:18:56.791 ************************************ 00:18:56.791 END TEST nvmf_multipath 00:18:56.791 ************************************ 00:18:56.791 03:03:35 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:56.791 03:03:35 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:56.791 03:03:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.791 03:03:35 -- common/autotest_common.sh@10 -- # set +x 00:18:57.051 ************************************ 00:18:57.051 START TEST nvmf_timeout 00:18:57.051 ************************************ 00:18:57.051 03:03:35 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:57.051 * Looking for test storage... 00:18:57.051 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:57.051 03:03:36 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:57.051 03:03:36 -- nvmf/common.sh@7 -- # uname -s 00:18:57.051 03:03:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:57.051 03:03:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:57.051 03:03:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:57.051 03:03:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:57.051 03:03:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:57.051 03:03:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:57.051 03:03:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:57.051 03:03:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:57.051 03:03:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:57.051 03:03:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:57.051 03:03:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:18:57.051 03:03:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:18:57.051 03:03:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:57.051 03:03:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:57.051 03:03:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:57.051 03:03:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:57.051 03:03:36 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:57.051 03:03:36 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:57.051 03:03:36 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:57.051 03:03:36 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:57.051 03:03:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.051 03:03:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.051 03:03:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.051 03:03:36 -- paths/export.sh@5 -- # export PATH 00:18:57.051 03:03:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:57.051 03:03:36 -- nvmf/common.sh@47 -- # : 0 00:18:57.051 03:03:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:57.051 03:03:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:57.051 03:03:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:57.051 03:03:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:57.051 03:03:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:57.051 03:03:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:57.051 03:03:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:57.051 03:03:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:57.051 03:03:36 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:57.051 03:03:36 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:57.051 03:03:36 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:57.051 03:03:36 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:57.051 03:03:36 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:57.051 03:03:36 -- host/timeout.sh@19 -- # nvmftestinit 00:18:57.051 03:03:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:57.051 03:03:36 -- nvmf/common.sh@435 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:18:57.051 03:03:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:57.051 03:03:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:57.051 03:03:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:57.051 03:03:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.051 03:03:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.051 03:03:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:57.051 03:03:36 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:18:57.051 03:03:36 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:18:57.051 03:03:36 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:18:57.051 03:03:36 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:18:57.051 03:03:36 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:18:57.051 03:03:36 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:18:57.051 03:03:36 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:57.051 03:03:36 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:57.051 03:03:36 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:57.051 03:03:36 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:18:57.051 03:03:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:57.051 03:03:36 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:57.051 03:03:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:57.051 03:03:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:57.051 03:03:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:57.051 03:03:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:57.051 03:03:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:57.051 03:03:36 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:57.052 03:03:36 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:18:57.052 03:03:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:18:57.052 Cannot find device "nvmf_tgt_br" 00:18:57.052 03:03:36 -- nvmf/common.sh@155 -- # true 00:18:57.052 03:03:36 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:18:57.052 Cannot find device "nvmf_tgt_br2" 00:18:57.052 03:03:36 -- nvmf/common.sh@156 -- # true 00:18:57.052 03:03:36 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:18:57.052 03:03:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:18:57.052 Cannot find device "nvmf_tgt_br" 00:18:57.052 03:03:36 -- nvmf/common.sh@158 -- # true 00:18:57.052 03:03:36 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:18:57.052 Cannot find device "nvmf_tgt_br2" 00:18:57.052 03:03:36 -- nvmf/common.sh@159 -- # true 00:18:57.052 03:03:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:18:57.052 03:03:36 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:18:57.310 03:03:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:57.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.310 03:03:36 -- nvmf/common.sh@162 -- # true 00:18:57.310 03:03:36 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:57.310 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:57.310 03:03:36 -- nvmf/common.sh@163 -- # true 00:18:57.310 03:03:36 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:18:57.310 03:03:36 -- nvmf/common.sh@169 
-- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:57.310 03:03:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:57.310 03:03:36 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:57.310 03:03:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:57.310 03:03:36 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:57.310 03:03:36 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:57.310 03:03:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:57.310 03:03:36 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:57.310 03:03:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:18:57.310 03:03:36 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:18:57.310 03:03:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:18:57.310 03:03:36 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:18:57.310 03:03:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:57.310 03:03:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:57.310 03:03:36 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:57.310 03:03:36 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:18:57.310 03:03:36 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:18:57.310 03:03:36 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:18:57.310 03:03:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:57.310 03:03:36 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:57.311 03:03:36 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:57.311 03:03:36 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:57.311 03:03:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:18:57.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:57.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:18:57.311 00:18:57.311 --- 10.0.0.2 ping statistics --- 00:18:57.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.311 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:18:57.311 03:03:36 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:18:57.311 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:57.311 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:18:57.311 00:18:57.311 --- 10.0.0.3 ping statistics --- 00:18:57.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.311 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:18:57.311 03:03:36 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:57.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:57.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:18:57.311 00:18:57.311 --- 10.0.0.1 ping statistics --- 00:18:57.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:57.311 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:57.311 03:03:36 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:57.311 03:03:36 -- nvmf/common.sh@422 -- # return 0 00:18:57.311 03:03:36 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:57.311 03:03:36 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:57.311 03:03:36 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:57.311 03:03:36 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:57.311 03:03:36 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:57.311 03:03:36 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:57.311 03:03:36 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:57.311 03:03:36 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:57.311 03:03:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:57.311 03:03:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:57.311 03:03:36 -- common/autotest_common.sh@10 -- # set +x 00:18:57.311 03:03:36 -- nvmf/common.sh@470 -- # nvmfpid=93879 00:18:57.311 03:03:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:57.311 03:03:36 -- nvmf/common.sh@471 -- # waitforlisten 93879 00:18:57.311 03:03:36 -- common/autotest_common.sh@817 -- # '[' -z 93879 ']' 00:18:57.311 03:03:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.311 03:03:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:57.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.311 03:03:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.311 03:03:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:57.311 03:03:36 -- common/autotest_common.sh@10 -- # set +x 00:18:57.570 [2024-04-23 03:03:36.525485] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:18:57.570 [2024-04-23 03:03:36.525624] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:57.570 [2024-04-23 03:03:36.653177] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:18:57.570 [2024-04-23 03:03:36.667895] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:57.570 [2024-04-23 03:03:36.708694] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:57.570 [2024-04-23 03:03:36.708769] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:57.570 [2024-04-23 03:03:36.708794] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:57.570 [2024-04-23 03:03:36.708804] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:57.570 [2024-04-23 03:03:36.708813] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
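Before the target boots below, it is worth distilling the topology the three pings just verified. A condensed standalone sketch of the nvmf_veth_init steps traced above (the intermediate ip link set ... up bring-up commands are elided for brevity; every command shown appears in the trace):

  # Initiator at 10.0.0.1 in the root namespace; target addresses 10.0.0.2/.3
  # inside nvmf_tgt_ns_spdk; all veth peers joined by the nvmf_br bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  # Allow NVMe/TCP traffic in and let the bridge forward between the peers.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT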
00:18:57.570 [2024-04-23 03:03:36.708997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.570 [2024-04-23 03:03:36.709011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.828 03:03:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:57.828 03:03:36 -- common/autotest_common.sh@850 -- # return 0 00:18:57.828 03:03:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:57.828 03:03:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:57.828 03:03:36 -- common/autotest_common.sh@10 -- # set +x 00:18:57.828 03:03:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:57.828 03:03:36 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:57.828 03:03:36 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:58.088 [2024-04-23 03:03:37.078053] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:58.088 03:03:37 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:58.346 Malloc0 00:18:58.346 03:03:37 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:58.605 03:03:37 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:58.864 03:03:37 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:59.122 [2024-04-23 03:03:38.081729] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:59.122 03:03:38 -- host/timeout.sh@32 -- # bdevperf_pid=93921 00:18:59.122 03:03:38 -- host/timeout.sh@34 -- # waitforlisten 93921 /var/tmp/bdevperf.sock 00:18:59.122 03:03:38 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:59.122 03:03:38 -- common/autotest_common.sh@817 -- # '[' -z 93921 ']' 00:18:59.122 03:03:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.122 03:03:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:59.122 03:03:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.122 03:03:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:59.122 03:03:38 -- common/autotest_common.sh@10 -- # set +x 00:18:59.122 [2024-04-23 03:03:38.157672] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:18:59.122 [2024-04-23 03:03:38.157804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93921 ] 00:18:59.381 [2024-04-23 03:03:38.282906] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:18:59.381 [2024-04-23 03:03:38.300250] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.381 [2024-04-23 03:03:38.333716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.949 03:03:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:59.949 03:03:39 -- common/autotest_common.sh@850 -- # return 0 00:18:59.949 03:03:39 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:00.208 03:03:39 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:19:00.467 NVMe0n1 00:19:00.467 03:03:39 -- host/timeout.sh@51 -- # rpc_pid=93940 00:19:00.467 03:03:39 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:00.467 03:03:39 -- host/timeout.sh@53 -- # sleep 1 00:19:00.725 Running I/O for 10 seconds... 00:19:01.660 03:03:40 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:01.921 [2024-04-23 03:03:40.836316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.921 [2024-04-23 03:03:40.836368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.921 [2024-04-23 03:03:40.836409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:57688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.921 [2024-04-23 03:03:40.836421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.921 [2024-04-23 03:03:40.836433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:57696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.921 [2024-04-23 03:03:40.836443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.921 [2024-04-23 03:03:40.836455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:57704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.921 [2024-04-23 03:03:40.836465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.921 [2024-04-23 03:03:40.836477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:57712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.921 [2024-04-23 03:03:40.836487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.921 [2024-04-23 03:03:40.836498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:57720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.921 [2024-04-23 03:03:40.836508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.921 [2024-04-23 03:03:40.836520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:57728 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:01.921 [2024-04-23 03:03:40.836530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[~50 similar *NOTICE* pairs omitted: WRITE (sqid:1, lba:57736-58128, len:8) commands, each completed ABORTED - SQ DELETION (00/08) qid:1]
00:19:01.922 [2024-04-23 03:03:40.839753]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839968] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.839978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.839989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.922 [2024-04-23 03:03:40.840177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.922 [2024-04-23 03:03:40.840187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:58296 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 
[2024-04-23 03:03:40.840422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.840590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.840602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841150] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.923 [2024-04-23 03:03:40.841469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.923 [2024-04-23 03:03:40.841555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.923 [2024-04-23 03:03:40.841566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:58520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:01.924 [2024-04-23 03:03:40.841662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:57632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:57640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:57648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:57656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:57672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:01.924 [2024-04-23 03:03:40.841796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.924 [2024-04-23 03:03:40.841808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2350810 is same with the state(5) to be set 00:19:01.924 [2024-04-23 03:03:40.841822] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:01.924 [2024-04-23 03:03:40.841830] nvme_qpair.c: 
00:19:01.924 [2024-04-23 03:03:40.841830] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:19:01.924 [2024-04-23 03:03:40.841839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57680 len:8 PRP1 0x0 PRP2 0x0
00:19:01.924 [2024-04-23 03:03:40.841849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:01.924 [2024-04-23 03:03:40.841891] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2350810 was disconnected and freed. reset controller.
00:19:01.924 [2024-04-23 03:03:40.842160] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:01.924 [2024-04-23 03:03:40.842240] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2355750 (9): Bad file descriptor
00:19:01.924 [2024-04-23 03:03:40.842343] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:01.924 [2024-04-23 03:03:40.842407] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:01.924 [2024-04-23 03:03:40.842450] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:01.924 [2024-04-23 03:03:40.842467] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2355750 with addr=10.0.0.2, port=4420
00:19:01.924 [2024-04-23 03:03:40.842478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2355750 is same with the state(5) to be set
00:19:01.924 [2024-04-23 03:03:40.842498] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2355750 (9): Bad file descriptor
00:19:01.924 [2024-04-23 03:03:40.842514] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:01.924 [2024-04-23 03:03:40.842524] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:01.924 [2024-04-23 03:03:40.842535] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:01.924 [2024-04-23 03:03:40.842555] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
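The burst above is one complete failed reset cycle: the qpair is torn down, every queued command is completed as ABORTED - SQ DELETION, and each reconnect attempt dies in connect() with errno = 111 (ECONNREFUSED) because nothing is listening on 10.0.0.2:4420 at this point in the test. How long bdev_nvme keeps cycling is governed by the attach options shown verbatim later in this log at host/timeout.sh@79 (--reconnect-delay-sec 1, --fast-io-fail-timeout-sec 2, --ctrlr-loss-timeout-sec 5). As a rough illustration, the controller's fate can be watched from outside with the same RPC methods the test itself traces; this polling loop is a sketch of ours, not part of host/timeout.sh:

    #!/usr/bin/env bash
    # Sketch: poll bdevperf over its RPC socket until bdev_nvme gives up.
    # rpc.py path, socket path and method names are taken from the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    while :; do
        name=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
        if [[ -z "$name" ]]; then
            # Empty list: ctrlr-loss-timeout-sec expired and the controller
            # (together with its NVMe0n1 bdev) has been deleted.
            echo "controller deleted"
            break
        fi
        echo "still attached as $name; reconnect retries continue"
        sleep 1
    done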
00:19:01.924 [2024-04-23 03:03:40.842567] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
03:03:40 -- host/timeout.sh@56 -- # sleep 2
00:19:03.826 [2024-04-23 03:03:42.842719] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:03.826 [2024-04-23 03:03:42.843276] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:03.826 [2024-04-23 03:03:42.843600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:03.826 [2024-04-23 03:03:42.843833] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2355750 with addr=10.0.0.2, port=4420
00:19:03.826 [2024-04-23 03:03:42.844254] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2355750 is same with the state(5) to be set
00:19:03.826 [2024-04-23 03:03:42.844687] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2355750 (9): Bad file descriptor
00:19:03.826 [2024-04-23 03:03:42.845117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:03.826 [2024-04-23 03:03:42.845531] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:03.826 [2024-04-23 03:03:42.845928] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:03.826 [2024-04-23 03:03:42.846174] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:03.826 [2024-04-23 03:03:42.846406] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:03.826 03:03:42 -- host/timeout.sh@57 -- # get_controller
00:19:03.826 03:03:42 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:03.826 03:03:42 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:04.085 03:03:43 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:19:04.085 03:03:43 -- host/timeout.sh@58 -- # get_bdev
00:19:04.085 03:03:43 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:04.085 03:03:43 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:04.342 03:03:43 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:19:04.342 03:03:43 -- host/timeout.sh@61 -- # sleep 5
00:19:05.718 [2024-04-23 03:03:44.846598] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:05.718 [2024-04-23 03:03:44.846720] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:05.718 [2024-04-23 03:03:44.846772] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:05.718 [2024-04-23 03:03:44.846790] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2355750 with addr=10.0.0.2, port=4420
00:19:05.718 [2024-04-23 03:03:44.846804] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2355750 is same with the state(5) to be set
00:19:05.718 [2024-04-23 03:03:44.846833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2355750 (9): Bad file descriptor
00:19:05.718 [2024-04-23 03:03:44.846874] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:05.718 [2024-04-23 03:03:44.846890] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:05.718 [2024-04-23 03:03:44.846902] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:05.718 [2024-04-23 03:03:44.846984] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:05.718 [2024-04-23 03:03:44.847001] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:08.255 [2024-04-23 03:03:46.847098] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:08.822
00:19:08.822 Latency(us)
00:19:08.822 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:08.822 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:08.822 Verification LBA range: start 0x0 length 0x4000
00:19:08.822 NVMe0n1 : 8.17 880.20 3.44 15.67 0.00 142617.89 4557.73 7015926.69
00:19:08.822 ===================================================================================================================
00:19:08.822 Total : 880.20 3.44 15.67 0.00 142617.89 4557.73 7015926.69
00:19:08.822 0
00:19:09.388 03:03:48 -- host/timeout.sh@62 -- # get_controller
00:19:09.388 03:03:48 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:19:09.388 03:03:48 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:19:09.647 03:03:48 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:19:09.647 03:03:48 -- host/timeout.sh@63 -- # get_bdev
00:19:09.647 03:03:48 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:19:09.647 03:03:48 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:19:09.905 03:03:48 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:19:09.905 03:03:48 -- host/timeout.sh@65 -- # wait 93940
00:19:09.905 03:03:48 -- host/timeout.sh@67 -- # killprocess 93921
00:19:09.906 03:03:48 -- common/autotest_common.sh@936 -- # '[' -z 93921 ']'
00:19:09.906 03:03:48 -- common/autotest_common.sh@940 -- # kill -0 93921
00:19:09.906 03:03:48 -- common/autotest_common.sh@941 -- # uname
00:19:09.906 03:03:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:09.906 03:03:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93921
00:19:09.906 killing process with pid 93921
00:19:09.906 Received shutdown signal, test time was about 9.321691 seconds
00:19:09.906
00:19:09.906 Latency(us)
00:19:09.906 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:09.906 ===================================================================================================================
00:19:09.906 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:09.906 03:03:48 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:09.906 03:03:48 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:09.906 03:03:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93921'
00:19:09.906 03:03:48 -- common/autotest_common.sh@955 -- # kill 93921
00:19:09.906 03:03:48 -- common/autotest_common.sh@960 -- # wait 93921
00:19:10.164 03:03:49 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-04-23 03:03:49.400574] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
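The notice above is the target side re-arming the test: host/timeout.sh@71 re-adds the TCP listener that an earlier step had removed, which is what allows the next bdevperf instance to connect at all. The toggle is a pair of target-side RPCs; a sketch with the exact arguments recorded in this log (the sleep is illustrative only, the real script drives the timing from its own steps):

    # Target-side listener toggle this test uses to provoke host reconnects.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    # While the listener is gone, every host connect() fails with errno = 111.
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 2   # illustrative gap; the host retries every reconnect-delay-sec
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420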
00:19:10.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:10.523 03:03:49 -- host/timeout.sh@74 -- # bdevperf_pid=94066
00:19:10.523 03:03:49 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:19:10.523 03:03:49 -- host/timeout.sh@76 -- # waitforlisten 94066 /var/tmp/bdevperf.sock
00:19:10.523 03:03:49 -- common/autotest_common.sh@817 -- # '[' -z 94066 ']'
00:19:10.524 03:03:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:10.524 03:03:49 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:10.524 03:03:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:10.524 03:03:49 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:10.524 03:03:49 -- common/autotest_common.sh@10 -- # set +x
00:19:10.524 [2024-04-23 03:03:49.471049] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:19:10.524 [2024-04-23 03:03:49.471352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94066 ]
00:19:10.524 [2024-04-23 03:03:49.593699] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:19:10.524 [2024-04-23 03:03:49.614168] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:10.524 [2024-04-23 03:03:49.650004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:10.813 03:03:49 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:10.813 03:03:49 -- common/autotest_common.sh@850 -- # return 0
00:19:10.813 03:03:49 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:19:11.071 03:03:49 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:19:11.330 NVMe0n1
00:19:11.330 03:03:50 -- host/timeout.sh@84 -- # rpc_pid=94077
00:19:11.330 03:03:50 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:11.330 03:03:50 -- host/timeout.sh@86 -- # sleep 1
00:19:11.330 Running I/O for 10 seconds...
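The attach at host/timeout.sh@79 is where the timeout behaviour seen throughout this log is configured. Restated with the three knobs annotated (values exactly as recorded above; the comments are our reading of the options, not script output):

    # Attach NVMe0 over TCP with the reconnect policy this test exercises:
    #   --reconnect-delay-sec 1       wait 1 s between reconnect attempts
    #   --fast-io-fail-timeout-sec 2  after 2 s disconnected, fail queued I/O fast
    #   --ctrlr-loss-timeout-sec 5    after 5 s disconnected, delete the controller
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

Keeping the fast-io-fail window shorter than the controller-loss window means pending I/O is failed back to the upper layer before the controller itself is deleted, which is the ordering the abort storm after the next remove_listener demonstrates.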
00:19:12.265 03:03:51 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:12.526 [2024-04-23 03:03:51.544872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:12.526 [2024-04-23 03:03:51.544922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ABORTED - SQ DELETION completion repeats for the admin ASYNC EVENT REQUESTs cid:1, cid:2 and cid:3; repeated records elided ...]
00:19:12.527 [2024-04-23 03:03:51.544995] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:12.527 [2024-04-23 03:03:51.545270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:57504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:12.527 [2024-04-23 03:03:51.545289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:12.527 [2024-04-23 03:03:51.545310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:12.527 [2024-04-23 03:03:51.545322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the identical command/completion pair repeats for every remaining queued WRITE on qid:1 from lba:57640 through lba:58056, each completed ABORTED - SQ DELETION (00/08); repeated records elided ...]
00:19:12.528 [2024-04-23 03:03:51.546817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:58064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:12.528 [2024-04-23 03:03:51.546827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:58072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.546847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:58080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.546868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:58088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.546889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:58096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.546910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:58104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.546930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:58112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.546950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.546962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:58120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:58128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:58136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:58144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:58152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:58160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:58168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:58176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:58184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:58192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:58200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:58208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:58216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:58224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.528 [2024-04-23 03:03:51.547977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:58232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.528 [2024-04-23 03:03:51.547986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.547998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:58240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:58248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:58256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:58264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:58272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:58280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:58288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:58296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:58304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:58312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:58320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:58328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:58344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:58352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:58360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:58368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:58376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:58384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:58392 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:58400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:58408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:58416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:58424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:58432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:58440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:58456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.548987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.548999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:58464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.549008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:58472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:19:12.529 [2024-04-23 03:03:51.549029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.549049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:58488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.549070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:58496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.549090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.529 [2024-04-23 03:03:51.549111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:57512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.529 [2024-04-23 03:03:51.549146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:57520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.529 [2024-04-23 03:03:51.549169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.529 [2024-04-23 03:03:51.549191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.529 [2024-04-23 03:03:51.549212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:57544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.529 [2024-04-23 03:03:51.549234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.529 [2024-04-23 03:03:51.549246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:57552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549255] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:57560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:57568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:57576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:57592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:57600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:57608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:57616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:57624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:12.530 [2024-04-23 03:03:51.549444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:58512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:12.530 [2024-04-23 03:03:51.549465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a67930 is same with the state(5) to be set 00:19:12.530 [2024-04-23 03:03:51.549488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:12.530 [2024-04-23 03:03:51.549496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:12.530 [2024-04-23 03:03:51.549505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:58520 len:8 PRP1 0x0 PRP2 0x0 00:19:12.530 [2024-04-23 03:03:51.549514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.530 [2024-04-23 03:03:51.549555] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a67930 was disconnected and freed. reset controller. 00:19:12.530 [2024-04-23 03:03:51.549808] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:12.530 [2024-04-23 03:03:51.549833] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor 00:19:12.530 [2024-04-23 03:03:51.549927] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.530 [2024-04-23 03:03:51.549993] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.530 [2024-04-23 03:03:51.550035] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:12.530 [2024-04-23 03:03:51.550051] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c750 with addr=10.0.0.2, port=4420 00:19:12.530 [2024-04-23 03:03:51.550062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set 00:19:12.530 [2024-04-23 03:03:51.550081] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor 00:19:12.530 [2024-04-23 03:03:51.550098] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:12.530 [2024-04-23 03:03:51.550110] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:12.530 [2024-04-23 03:03:51.550121] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:12.530 [2024-04-23 03:03:51.550904] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
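Two failure signatures above are worth decoding. The completion status (00/08) is NVMe status code type 0x0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion": every I/O still queued on the qpair is drained with that status when the submission queue is torn down for the reset. The connect() failures report errno = 111, which is ECONNREFUSED on Linux; at this point in the test the target has no listener on 10.0.0.2:4420, so reconnect attempts from both the io_uring and POSIX sock layers are refused until the listener is restored. A minimal diagnostic sketch in bash, assuming a shell on the test VM with the SPDK repo at the path the log already shows (the port probe itself is generic bash):

    # Probe the NVMe/TCP listener the initiator keeps dialing; bash's /dev/tcp
    # surfaces the same ECONNREFUSED (errno 111) that uring.c/posix.c report.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 refused the connection - host-side resets will keep failing"
        # Restoring the listener (the same RPC the test runs at host/timeout.sh@91)
        # lets the next reconnect attempt, and therefore the reset, succeed:
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    fi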
00:19:12.530 [2024-04-23 03:03:51.551121] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
03:03:51 -- host/timeout.sh@90 -- # sleep 1
00:19:13.464 [2024-04-23 03:03:52.551759] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:13.464 [2024-04-23 03:03:52.552233] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:13.464 [2024-04-23 03:03:52.552517] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:13.464 [2024-04-23 03:03:52.552745] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c750 with addr=10.0.0.2, port=4420
00:19:13.464 [2024-04-23 03:03:52.553158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:13.464 [2024-04-23 03:03:52.553402] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor
00:19:13.464 [2024-04-23 03:03:52.553429] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:13.464 [2024-04-23 03:03:52.553439] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:13.464 [2024-04-23 03:03:52.553450] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:13.464 [2024-04-23 03:03:52.553477] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:13.464 [2024-04-23 03:03:52.553489] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
03:03:52 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:13.722 [2024-04-23 03:03:52.813432] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
03:03:52 -- host/timeout.sh@92 -- # wait 94077
00:19:14.656 [2024-04-23 03:03:53.571787] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:22.774
00:19:22.774                                                                 Latency(us)
00:19:22.774 Device Information          : runtime(s)     IOPS     MiB/s   Fail/s   TO/s   Average       min          max
00:19:22.774 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:22.774 Verification LBA range: start 0x0 length 0x4000
00:19:22.774 NVMe0n1                     :      10.01  5617.75     21.94     0.00   0.00  22729.63   1757.56   3035150.89
00:19:22.774 ===================================================================================================================
00:19:22.774 Total                       :             5617.75     21.94     0.00   0.00  22729.63   1757.56   3035150.89
00:19:22.774 0
00:19:22.774 03:04:00 -- host/timeout.sh@97 -- # rpc_pid=94189
00:19:22.774 03:04:00 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
03:04:00 -- host/timeout.sh@98 -- # sleep 1
00:19:22.774 Running I/O for 10 seconds...
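The summary row above is internally consistent, which is a quick way to vet a bdevperf run: MiB/s is just IOPS times the 4096-byte I/O size, and by Little's law IOPS times average latency should recover the queue depth of 128 that the job line declares. A sanity-check sketch in plain bash (bc is the only dependency; every figure is copied from the table above):

    # MiB/s = IOPS * IO size / 2^20 for the 4096-byte I/Os used here:
    echo "scale=2; 5617.75 * 4096 / 1048576" | bc        # -> 21.94, matching the MiB/s column
    # Little's law: IOPS * average latency (in seconds) ~= outstanding I/Os:
    echo "scale=2; 5617.75 * 22729.63 / 1000000" | bc    # -> 127.68, i.e. ~ the queue depth of 128

The ~3.0 s max latency plausibly belongs to I/O that sat queued across the failed resets and reconnects logged above, while the listener was absent.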
00:19:22.774 03:04:01 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:22.774 [2024-04-23 03:04:01.698957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:58392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:22.774 [2024-04-23 03:04:01.699073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:22.774 [2024-04-23 03:04:01.699098 - 03:04:01.703646] nvme_qpair.c: 243/474: *NOTICE*: [~80 identical command/completion pairs elided: WRITE sqid:1 lba:58400-58528 and READ sqid:1 lba:57512-58000 (len:8), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:19:22.777 [2024-04-23 03:04:01.703657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
READ sqid:1 cid:64 nsid:1 lba:58008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:58016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:58032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:58048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:58056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:58064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:58080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58088 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:58104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:58120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:58128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.703990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.703999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:58144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:22.777 [2024-04-23 03:04:01.704082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:58176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:58184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:58208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:58232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:58240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704308] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:58264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:58272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:58280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.777 [2024-04-23 03:04:01.704410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.777 [2024-04-23 03:04:01.704422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:58296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:58304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:58320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704513] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:58336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:58344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:58352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:58376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.778 [2024-04-23 03:04:01.704638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704649] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a9fdc0 is same with the state(5) to be set 00:19:22.778 [2024-04-23 03:04:01.704662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:22.778 [2024-04-23 03:04:01.704670] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:22.778 [2024-04-23 03:04:01.704679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58384 len:8 PRP1 0x0 PRP2 0x0 00:19:22.778 [2024-04-23 03:04:01.704688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.778 [2024-04-23 03:04:01.704730] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a9fdc0 was disconnected and freed. reset controller. 
00:19:22.778 [2024-04-23 03:04:01.704830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:22.778 [2024-04-23 03:04:01.704847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:22.778 [2024-04-23 03:04:01.704858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:22.778 [2024-04-23 03:04:01.704867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:22.778 [2024-04-23 03:04:01.704877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:22.778 [2024-04-23 03:04:01.704886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:22.778 [2024-04-23 03:04:01.704896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:19:22.778 [2024-04-23 03:04:01.704914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:22.778 [2024-04-23 03:04:01.704923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:22.778 [2024-04-23 03:04:01.705157] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:22.778 [2024-04-23 03:04:01.705181] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor
00:19:22.778 [2024-04-23 03:04:01.705278] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:22.778 [2024-04-23 03:04:01.705331] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:22.778 [2024-04-23 03:04:01.705373] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:22.778 [2024-04-23 03:04:01.705389] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c750 with addr=10.0.0.2, port=4420
00:19:22.778 [2024-04-23 03:04:01.705401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:22.778 [2024-04-23 03:04:01.705420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor
00:19:22.778 [2024-04-23 03:04:01.705436] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:22.778 [2024-04-23 03:04:01.705446] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:22.778 [2024-04-23 03:04:01.705456] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:22.778 [2024-04-23 03:04:01.705476] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:22.778 [2024-04-23 03:04:01.705487] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
03:04:01 -- host/timeout.sh@101 -- # sleep 3
00:19:23.712 [2024-04-23 03:04:02.705606] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:23.712 [2024-04-23 03:04:02.706062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:23.712 [2024-04-23 03:04:02.706363] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:23.712 [2024-04-23 03:04:02.706590] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c750 with addr=10.0.0.2, port=4420
00:19:23.712 [2024-04-23 03:04:02.706990] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:23.712 [2024-04-23 03:04:02.707420] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor
00:19:23.712 [2024-04-23 03:04:02.707859] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:23.712 [2024-04-23 03:04:02.708271] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:23.712 [2024-04-23 03:04:02.708669] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:23.712 [2024-04-23 03:04:02.708908] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:23.712 [2024-04-23 03:04:02.709142] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:24.646 [2024-04-23 03:04:03.709742] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:24.646 [2024-04-23 03:04:03.710187] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:24.646 [2024-04-23 03:04:03.710470] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:24.646 [2024-04-23 03:04:03.710698] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c750 with addr=10.0.0.2, port=4420
00:19:24.646 [2024-04-23 03:04:03.711098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:24.646 [2024-04-23 03:04:03.711573] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor
00:19:24.646 [2024-04-23 03:04:03.712004] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:24.646 [2024-04-23 03:04:03.712424] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:24.646 [2024-04-23 03:04:03.712823] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:24.646 [2024-04-23 03:04:03.712865] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:24.646 [2024-04-23 03:04:03.712879] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.580 [2024-04-23 03:04:04.713345] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:25.580 [2024-04-23 03:04:04.713733] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:25.580 [2024-04-23 03:04:04.714041] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:19:25.580 [2024-04-23 03:04:04.714285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a6c750 with addr=10.0.0.2, port=4420
00:19:25.580 [2024-04-23 03:04:04.714743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a6c750 is same with the state(5) to be set
00:19:25.580 [2024-04-23 03:04:04.715549] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a6c750 (9): Bad file descriptor
00:19:25.580 [2024-04-23 03:04:04.716285] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:19:25.580 [2024-04-23 03:04:04.716532] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:19:25.580 [2024-04-23 03:04:04.716567] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:19:25.580 [2024-04-23 03:04:04.721283] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:19:25.580 [2024-04-23 03:04:04.721318] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:25.580 03:04:04 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:25.838 [2024-04-23 03:04:04.981575] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:26.096 03:04:04 -- host/timeout.sh@103 -- # wait 94189
00:19:26.664 [2024-04-23 03:04:05.756233] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:31.959
00:19:31.959 Latency(us)
00:19:31.959 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:31.959 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:31.959 Verification LBA range: start 0x0 length 0x4000
00:19:31.959 NVMe0n1                     :      10.01    4748.24      18.55    3330.48     0.00   15805.19     659.08 3019898.88
00:19:31.959 ===================================================================================================================
00:19:31.959 Total                       :              4748.24      18.55    3330.48     0.00   15805.19       0.00 3019898.88
00:19:31.959 0
00:19:31.959 03:04:10 -- host/timeout.sh@105 -- # killprocess 94066
00:19:31.959 03:04:10 -- common/autotest_common.sh@936 -- # '[' -z 94066 ']'
00:19:31.959 03:04:10 -- common/autotest_common.sh@940 -- # kill -0 94066
00:19:31.959 03:04:10 -- common/autotest_common.sh@941 -- # uname
00:19:31.959 03:04:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:31.959 03:04:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94066
00:19:31.959 killing process with pid 94066
00:19:31.959 Received shutdown signal, test time was about 10.000000 seconds
00:19:31.959
00:19:31.959 Latency(us)
00:19:31.959 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:19:31.959 ===================================================================================================================
00:19:31.959 Total                       :                 0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:19:31.959 03:04:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:19:31.959 03:04:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:19:31.959 03:04:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94066'
00:19:31.959 03:04:10 -- common/autotest_common.sh@955 -- # kill 94066
00:19:31.959 03:04:10 -- common/autotest_common.sh@960 -- # wait 94066
00:19:31.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:19:31.960 03:04:10 -- host/timeout.sh@110 -- # bdevperf_pid=94298
00:19:31.960 03:04:10 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:19:31.960 03:04:10 -- host/timeout.sh@112 -- # waitforlisten 94298 /var/tmp/bdevperf.sock
00:19:31.960 03:04:10 -- common/autotest_common.sh@817 -- # '[' -z 94298 ']'
00:19:31.960 03:04:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:19:31.960 03:04:10 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:31.960 03:04:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:19:31.960 03:04:10 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:31.960 03:04:10 -- common/autotest_common.sh@10 -- # set +x
00:19:31.960 [2024-04-23 03:04:10.809245] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:19:31.960 [2024-04-23 03:04:10.809581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94298 ]
00:19:31.960 [2024-04-23 03:04:10.937600] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:19:31.960 [2024-04-23 03:04:10.958379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:31.960 [2024-04-23 03:04:10.993664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:19:31.960 03:04:11 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:31.960 03:04:11 -- common/autotest_common.sh@850 -- # return 0
00:19:31.960 03:04:11 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94298 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:19:31.960 03:04:11 -- host/timeout.sh@116 -- # dtrace_pid=94306
00:19:31.960 03:04:11 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:19:32.218 03:04:11 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:19:32.786 NVMe0n1
00:19:32.786 03:04:11 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:32.786 03:04:11 -- host/timeout.sh@124 -- # rpc_pid=94348
00:19:32.786 03:04:11 -- host/timeout.sh@125 -- # sleep 1
00:19:32.786 Running I/O for 10 seconds...
00:19:33.721 03:04:12 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:33.983 [2024-04-23 03:04:12.928064] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set
00:19:33.983 [... the same nvmf_tcp_qpair_set_recv_state message for tqpair=0x1bf4ee0 repeats continuously through 03:04:12.929234 ...]
tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set 00:19:33.985 [2024-04-23 03:04:12.929202] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set 00:19:33.985 [2024-04-23 03:04:12.929210] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set 00:19:33.985 [2024-04-23 03:04:12.929218] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set 00:19:33.985 [2024-04-23 03:04:12.929226] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set 00:19:33.985 [2024-04-23 03:04:12.929234] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf4ee0 is same with the state(5) to be set 00:19:33.985 [2024-04-23 03:04:12.929679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:40272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:31624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64800 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:104032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:42056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.929988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:87504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.929999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:34728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:52880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:19:33.985 [2024-04-23 03:04:12.930111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:86192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:32448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:109864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:7256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 
03:04:12.930345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:128080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:27760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.985 [2024-04-23 03:04:12.930952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.985 [2024-04-23 03:04:12.930962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.930974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.930984] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.930996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:4376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:111528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:124504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:104192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931222] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:63928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:48664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:78992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:117504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:70448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:114264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:34360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:41248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:128520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:84264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:33.986 [2024-04-23 03:04:12.931711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:100800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:112472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:111184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:52728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.986 [2024-04-23 03:04:12.931874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.986 [2024-04-23 03:04:12.931886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.931896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.931912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:125648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.931922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 
03:04:12.931934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.931944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.931956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.931966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.931978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.931988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932172] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:89784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:48024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:75448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:28 nsid:1 lba:63880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:117168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:65272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:128200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:78744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:34240 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:115136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.987 [2024-04-23 03:04:12.932728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.987 [2024-04-23 03:04:12.932738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:47360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:33.988 [2024-04-23 03:04:12.932849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.932980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.932994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:48696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.933004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.933015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:33.988 [2024-04-23 03:04:12.933025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.933037] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1143ee0 is same with the state(5) to be set 00:19:33.988 [2024-04-23 03:04:12.933050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:33.988 [2024-04-23 03:04:12.933058] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.988 [2024-04-23 03:04:12.933067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:94192 len:8 PRP1 0x0 PRP2 0x0 
00:19:33.988 [2024-04-23 03:04:12.933077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.933119] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1143ee0 was disconnected and freed. reset controller. 00:19:33.988 [2024-04-23 03:04:12.934315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.988 [2024-04-23 03:04:12.934915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.935509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.988 [2024-04-23 03:04:12.935878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.936300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.988 [2024-04-23 03:04:12.936759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.937200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:33.988 [2024-04-23 03:04:12.937635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.988 [2024-04-23 03:04:12.938167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1f40 is same with the state(5) to be set 00:19:33.988 [2024-04-23 03:04:12.938830] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.988 [2024-04-23 03:04:12.939249] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1f40 (9): Bad file descriptor 00:19:33.988 [2024-04-23 03:04:12.939858] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.988 [2024-04-23 03:04:12.939942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.988 [2024-04-23 03:04:12.939989] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:33.988 [2024-04-23 03:04:12.940007] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1f40 with addr=10.0.0.2, port=4420 00:19:33.988 [2024-04-23 03:04:12.940019] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1f40 is same with the state(5) to be set 00:19:33.988 [2024-04-23 03:04:12.940040] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1f40 (9): Bad file descriptor 00:19:33.988 [2024-04-23 03:04:12.940057] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:33.988 [2024-04-23 03:04:12.940067] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:33.988 [2024-04-23 03:04:12.940078] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
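The dump above is bdev_nvme draining a dead TCP qpair: every READ still queued on submission queue 1 is completed manually with ABORTED - SQ DELETION (status code type 00, status code 08), qpair 0x1143ee0 is disconnected and freed, and a controller reset is scheduled. Two hedged one-liners for summarizing a dump like this one (the log file name is an assumption, not part of the test):

  # total aborted completions in the dump
  grep -c 'ABORTED - SQ DELETION' build.log
  # which submission-queue/command ids were aborted, most frequent first
  grep -o 'READ sqid:[0-9]* cid:[0-9]*' build.log | sort | uniq -c | sort -rn | head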
00:19:33.988 [2024-04-23 03:04:12.940099] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:33.988 [2024-04-23 03:04:12.940112] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:33.988 03:04:12 -- host/timeout.sh@128 -- # wait 94348 00:19:35.891 [2024-04-23 03:04:14.940277] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.891 [2024-04-23 03:04:14.940747] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.891 [2024-04-23 03:04:14.941080] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:35.891 [2024-04-23 03:04:14.941360] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1f40 with addr=10.0.0.2, port=4420 00:19:35.891 [2024-04-23 03:04:14.941778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1f40 is same with the state(5) to be set 00:19:35.891 [2024-04-23 03:04:14.942213] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1f40 (9): Bad file descriptor 00:19:35.891 [2024-04-23 03:04:14.942636] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:35.891 [2024-04-23 03:04:14.943076] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:35.891 [2024-04-23 03:04:14.943561] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.891 [2024-04-23 03:04:14.943857] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:35.891 [2024-04-23 03:04:14.944185] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.825 [2024-04-23 03:04:16.944849] uring.c: 648:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.825 [2024-04-23 03:04:16.945357] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.825 [2024-04-23 03:04:16.945416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:37.825 [2024-04-23 03:04:16.945436] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10d1f40 with addr=10.0.0.2, port=4420 00:19:37.825 [2024-04-23 03:04:16.945450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d1f40 is same with the state(5) to be set 00:19:37.825 [2024-04-23 03:04:16.945481] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d1f40 (9): Bad file descriptor 00:19:37.825 [2024-04-23 03:04:16.945516] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:37.825 [2024-04-23 03:04:16.945529] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:37.825 [2024-04-23 03:04:16.945539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:37.825 [2024-04-23 03:04:16.945567] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
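Each retry that follows has the same shape: both socket back-ends (uring.c and posix.c) report connect() failed, errno = 111 (ECONNREFUSED, i.e. nothing is accepting on 10.0.0.2:4420), nvme_tcp marks the qpair failed, nvme_ctrlr reports that controller reinitialization failed, and bdev_nvme schedules the next reset roughly two seconds later (03:04:12 -> :14 -> :16 -> :18). A minimal sketch of an equivalent reachability probe, assuming bash and a 1-second timeout:

  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo 'listener gone: connect() fails with ECONNREFUSED (errno 111)'
  fi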
00:19:37.825 [2024-04-23 03:04:16.945580] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.357 [2024-04-23 03:04:18.945662] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:40.924 00:19:40.924 Latency(us) 00:19:40.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.924 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:40.924 NVMe0n1 : 8.16 1883.47 7.36 15.69 0.00 67474.81 8638.84 7046430.72 00:19:40.924 =================================================================================================================== 00:19:40.924 Total : 1883.47 7.36 15.69 0.00 67474.81 8638.84 7046430.72 00:19:40.924 0 00:19:40.924 03:04:19 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:40.924 Attaching 5 probes... 00:19:40.924 1313.335846: reset bdev controller NVMe0 00:19:40.924 1314.290643: reconnect bdev controller NVMe0 00:19:40.924 3314.671359: reconnect delay bdev controller NVMe0 00:19:40.924 3314.691975: reconnect bdev controller NVMe0 00:19:40.924 5319.239920: reconnect delay bdev controller NVMe0 00:19:40.924 5319.257492: reconnect bdev controller NVMe0 00:19:40.924 7320.122314: reconnect delay bdev controller NVMe0 00:19:40.924 7320.158091: reconnect bdev controller NVMe0 00:19:40.924 03:04:19 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:40.924 03:04:19 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:40.924 03:04:19 -- host/timeout.sh@136 -- # kill 94306 00:19:40.924 03:04:19 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:40.924 03:04:19 -- host/timeout.sh@139 -- # killprocess 94298 00:19:40.924 03:04:19 -- common/autotest_common.sh@936 -- # '[' -z 94298 ']' 00:19:40.924 03:04:19 -- common/autotest_common.sh@940 -- # kill -0 94298 00:19:40.924 03:04:19 -- common/autotest_common.sh@941 -- # uname 00:19:40.924 03:04:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:40.924 03:04:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94298 00:19:40.924 killing process with pid 94298 00:19:40.924 Received shutdown signal, test time was about 8.217728 seconds 00:19:40.924 00:19:40.924 Latency(us) 00:19:40.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:40.924 =================================================================================================================== 00:19:40.924 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:40.924 03:04:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:40.925 03:04:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:40.925 03:04:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94298' 00:19:40.925 03:04:19 -- common/autotest_common.sh@955 -- # kill 94298 00:19:40.925 03:04:19 -- common/autotest_common.sh@960 -- # wait 94298 00:19:41.182 03:04:20 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:41.441 03:04:20 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:41.441 03:04:20 -- host/timeout.sh@145 -- # nvmftestfini 00:19:41.441 03:04:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:41.441 03:04:20 -- nvmf/common.sh@117 -- # sync 00:19:41.441 03:04:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:41.441 03:04:20 -- nvmf/common.sh@120 -- # set +e 
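The verdict comes down to counting delayed reconnects in the bdevperf trace: trace.txt recorded three 'reconnect delay bdev controller NVMe0' probes (at roughly 3.3 s, 5.3 s and 7.3 s), so the guard (( 3 <= 2 )) is false, the test passes, and the bdevperf process (pid 94306) is killed. A hedged reconstruction of the check host/timeout.sh performs (the variable name is illustrative, not the script's):

  delays=$(grep -c 'reconnect delay bdev controller NVMe0' trace.txt)
  # pass only if more than two delayed reconnect attempts were traced
  if (( delays <= 2 )); then
      echo "expected >2 delayed reconnects, got $delays" >&2
      exit 1
  fi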
00:19:41.441 03:04:20 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:41.441 03:04:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:41.441 rmmod nvme_tcp
00:19:41.441 rmmod nvme_fabrics
00:19:41.441 rmmod nvme_keyring
00:19:41.441 03:04:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:41.441 03:04:20 -- nvmf/common.sh@124 -- # set -e
00:19:41.441 03:04:20 -- nvmf/common.sh@125 -- # return 0
00:19:41.441 03:04:20 -- nvmf/common.sh@478 -- # '[' -n 93879 ']'
00:19:41.441 03:04:20 -- nvmf/common.sh@479 -- # killprocess 93879
00:19:41.441 03:04:20 -- common/autotest_common.sh@936 -- # '[' -z 93879 ']'
00:19:41.441 03:04:20 -- common/autotest_common.sh@940 -- # kill -0 93879
00:19:41.441 03:04:20 -- common/autotest_common.sh@941 -- # uname
00:19:41.441 03:04:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:19:41.441 03:04:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93879
00:19:41.441 killing process with pid 93879
00:19:41.441 03:04:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:19:41.441 03:04:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:19:41.441 03:04:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93879'
00:19:41.441 03:04:20 -- common/autotest_common.sh@955 -- # kill 93879
00:19:41.441 03:04:20 -- common/autotest_common.sh@960 -- # wait 93879
00:19:41.699 03:04:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:19:41.699 03:04:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:19:41.699 03:04:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:19:41.699 03:04:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:41.699 03:04:20 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:41.699 03:04:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:41.699 03:04:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:41.699 03:04:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:41.699 03:04:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if
00:19:41.699 ************************************
00:19:41.699 END TEST nvmf_timeout
00:19:41.699 ************************************
00:19:41.699
00:19:41.699 real 0m44.762s
00:19:41.699 user 2m11.616s
00:19:41.699 sys 0m5.415s
00:19:41.699 03:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:41.699 03:04:20 -- common/autotest_common.sh@10 -- # set +x
00:19:41.699 03:04:20 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]]
00:19:41.699 03:04:20 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:19:41.699 03:04:20 -- common/autotest_common.sh@716 -- # xtrace_disable
00:19:41.699 03:04:20 -- common/autotest_common.sh@10 -- # set +x
00:19:41.699 03:04:20 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:19:41.699 ************************************
00:19:41.699 END TEST nvmf_tcp
00:19:41.699 ************************************
00:19:41.699
00:19:41.699 real 10m50.822s
00:19:41.699 user 28m40.788s
00:19:41.699 sys 3m27.983s
00:19:41.699 03:04:20 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:41.699 03:04:20 -- common/autotest_common.sh@10 -- # set +x
00:19:41.958 03:04:20 -- spdk/autotest.sh@286 -- # [[ 1 -eq 0 ]]
00:19:41.958 03:04:20 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:19:41.958 03:04:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:41.958 03:04:20 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:41.958 03:04:20 -- common/autotest_common.sh@10 -- # set +x
00:19:41.958 ************************************
00:19:41.958 START TEST nvmf_dif
00:19:41.958 ************************************
00:19:41.958 03:04:20 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:19:41.958 * Looking for test storage...
00:19:41.958 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target
00:19:41.958 03:04:21 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:19:41.958 03:04:21 -- nvmf/common.sh@7 -- # uname -s
00:19:41.958 03:04:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:41.958 03:04:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:41.958 03:04:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:41.958 03:04:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:41.958 03:04:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:41.958 03:04:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:41.958 03:04:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:41.958 03:04:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:41.958 03:04:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:41.959 03:04:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:41.959 03:04:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298
00:19:41.959 03:04:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298
00:19:41.959 03:04:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:41.959 03:04:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:41.959 03:04:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:19:41.959 03:04:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:41.959 03:04:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:19:41.959 03:04:21 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:41.959 03:04:21 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:41.959 03:04:21 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:41.959 03:04:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:41.959 03:04:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:41.959 03:04:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:41.959 03:04:21 -- paths/export.sh@5 -- # export PATH
00:19:41.959 03:04:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:41.959 03:04:21 -- nvmf/common.sh@47 -- # : 0
00:19:41.959 03:04:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:41.959 03:04:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:41.959 03:04:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:41.959 03:04:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:41.959 03:04:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:41.959 03:04:21 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:41.959 03:04:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:41.959 03:04:21 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:41.959 03:04:21 -- target/dif.sh@15 -- # NULL_META=16
00:19:41.959 03:04:21 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512
00:19:41.959 03:04:21 -- target/dif.sh@15 -- # NULL_SIZE=64
00:19:41.959 03:04:21 -- target/dif.sh@15 -- # NULL_DIF=1
00:19:41.959 03:04:21 -- target/dif.sh@135 -- # nvmftestinit
00:19:41.959 03:04:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:19:41.959 03:04:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:41.959 03:04:21 -- nvmf/common.sh@437 -- # prepare_net_devs
00:19:41.959 03:04:21 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:19:41.959 03:04:21 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:19:41.959 03:04:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:41.959 03:04:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:19:41.959 03:04:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:41.959 03:04:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]]
00:19:41.959 03:04:21 -- nvmf/common.sh@405 -- # [[ no == yes ]]
00:19:41.959 03:04:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]]
00:19:41.959 03:04:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]]
00:19:41.959 03:04:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]]
00:19:41.959 03:04:21 -- nvmf/common.sh@421 -- # nvmf_veth_init
00:19:41.959 03:04:21 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:41.959 03:04:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:41.959 03:04:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:19:41.959 03:04:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br
00:19:41.959 03:04:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:19:41.959 03:04:21 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:19:41.959 03:04:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:19:41.959 03:04:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:41.959 03:04:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:19:41.959 03:04:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:19:41.959 03:04:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:19:41.959 03:04:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:19:41.959 03:04:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster
00:19:41.959 03:04:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster
00:19:41.959 Cannot find device "nvmf_tgt_br"
00:19:41.959 03:04:21 -- nvmf/common.sh@155 -- # true
00:19:41.959 03:04:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster
00:19:41.959 Cannot find device "nvmf_tgt_br2"
00:19:41.959 03:04:21 -- nvmf/common.sh@156 -- # true
00:19:41.959 03:04:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down
00:19:41.959 03:04:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down
00:19:41.959 Cannot find device "nvmf_tgt_br"
00:19:41.959 03:04:21 -- nvmf/common.sh@158 -- # true
00:19:41.959 03:04:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down
00:19:42.217 Cannot find device "nvmf_tgt_br2"
00:19:42.217 03:04:21 -- nvmf/common.sh@159 -- # true
00:19:42.217 03:04:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge
00:19:42.217 03:04:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if
00:19:42.217 03:04:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:42.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:42.217 03:04:21 -- nvmf/common.sh@162 -- # true
00:19:42.217 03:04:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:42.217 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:19:42.217 03:04:21 -- nvmf/common.sh@163 -- # true
00:19:42.217 03:04:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk
00:19:42.217 03:04:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:19:42.217 03:04:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:19:42.217 03:04:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:19:42.217 03:04:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:19:42.217 03:04:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:19:42.217 03:04:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:19:42.217 03:04:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:19:42.217 03:04:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:19:42.217 03:04:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up
00:19:42.217 03:04:21 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up
00:19:42.217 03:04:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up
00:19:42.218 03:04:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up
00:19:42.218 03:04:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:19:42.218 03:04:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:19:42.218 03:04:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:19:42.218 03:04:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge
00:19:42.218 03:04:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up
00:19:42.218 03:04:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br
00:19:42.218 03:04:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br
00:19:42.218 03:04:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:19:42.218 03:04:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:19:42.218 03:04:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:19:42.218 03:04:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2
00:19:42.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:42.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms
00:19:42.218
00:19:42.218 --- 10.0.0.2 ping statistics ---
00:19:42.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:42.218 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms
00:19:42.218 03:04:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3
00:19:42.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:19:42.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms
00:19:42.218
00:19:42.218 --- 10.0.0.3 ping statistics ---
00:19:42.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:42.218 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms
00:19:42.218 03:04:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:19:42.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:42.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:19:42.218
00:19:42.218 --- 10.0.0.1 ping statistics ---
00:19:42.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:42.218 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:19:42.218 03:04:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:42.218 03:04:21 -- nvmf/common.sh@422 -- # return 0
00:19:42.218 03:04:21 -- nvmf/common.sh@439 -- # '[' iso == iso ']'
00:19:42.218 03:04:21 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:19:42.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:19:42.785 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:19:42.785 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:19:42.785 03:04:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:42.785 03:04:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:19:42.785 03:04:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:19:42.785 03:04:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:42.785 03:04:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:19:42.785 03:04:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:19:42.785 03:04:21 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:19:42.785 03:04:21 -- target/dif.sh@137 -- # nvmfappstart
00:19:42.785 03:04:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:19:42.785 03:04:21 -- common/autotest_common.sh@710 -- # xtrace_disable
00:19:42.785 03:04:21 -- common/autotest_common.sh@10 -- # set +x
00:19:42.785 03:04:21 -- nvmf/common.sh@470 -- # nvmfpid=94787
00:19:42.785 03:04:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:19:42.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:42.785 03:04:21 -- nvmf/common.sh@471 -- # waitforlisten 94787
00:19:42.785 03:04:21 -- common/autotest_common.sh@817 -- # '[' -z 94787 ']'
00:19:42.785 03:04:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:42.785 03:04:21 -- common/autotest_common.sh@822 -- # local max_retries=100
00:19:42.785 03:04:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
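For reference, the nvmf_veth_init sequence traced above builds a small veth topology: the SPDK target runs inside the nvmf_tgt_ns_spdk namespace with two addressed interfaces (10.0.0.2 and 10.0.0.3), the initiator keeps 10.0.0.1 on the host side, and the three host-side peers are enslaved to the nvmf_br bridge. A minimal standalone sketch of the same setup, using only the names, addresses, and commands shown in the log (run as root; the teardown and error handling the script performs are omitted):

    #!/usr/bin/env bash
    set -e
    # Namespace that will host the SPDK nvmf_tgt process.
    ip netns add nvmf_tgt_ns_spdk
    # Three veth pairs: one initiator-side, two target-side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # Move the target ends into the namespace, then address everything.
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # Bring every link up, inside and outside the namespace.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up; ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # Bridge the host-side peers together and open the NVMe/TCP port.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> first target address, as in the log

With this in place, running nvmf_tgt under "ip netns exec nvmf_tgt_ns_spdk" (as the NVMF_TARGET_NS_CMD wrapping above shows) confines the target's listeners to the namespaced interfaces while the initiator reaches them across the bridge.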
00:19:42.785 03:04:21 -- common/autotest_common.sh@826 -- # xtrace_disable
00:19:42.785 03:04:21 -- common/autotest_common.sh@10 -- # set +x
00:19:43.045 [2024-04-23 03:04:21.858259] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization...
00:19:43.045 [2024-04-23 03:04:21.858564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:43.045 [2024-04-23 03:04:21.982057] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation.
00:19:43.045 [2024-04-23 03:04:22.003441] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:43.045 [2024-04-23 03:04:22.043627] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:43.045 [2024-04-23 03:04:22.043853] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:43.045 [2024-04-23 03:04:22.043877] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:43.045 [2024-04-23 03:04:22.043888] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:43.045 [2024-04-23 03:04:22.043896] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:43.045 [2024-04-23 03:04:22.043932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:19:43.045 03:04:22 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:19:43.045 03:04:22 -- common/autotest_common.sh@850 -- # return 0
00:19:43.045 03:04:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:19:43.045 03:04:22 -- common/autotest_common.sh@716 -- # xtrace_disable
00:19:43.045 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.045 03:04:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:43.045 03:04:22 -- target/dif.sh@139 -- # create_transport
00:19:43.045 03:04:22 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:19:43.045 03:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:43.045 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.045 [2024-04-23 03:04:22.174811] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:43.045 03:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:43.045 03:04:22 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:19:43.045 03:04:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:43.045 03:04:22 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:43.045 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.304 ************************************
00:19:43.304 START TEST fio_dif_1_default
00:19:43.304 ************************************
00:19:43.304 03:04:22 -- common/autotest_common.sh@1111 -- # fio_dif_1
00:19:43.304 03:04:22 -- target/dif.sh@86 -- # create_subsystems 0
00:19:43.304 03:04:22 -- target/dif.sh@28 -- # local sub
00:19:43.304 03:04:22 -- target/dif.sh@30 -- # for sub in "$@"
00:19:43.304 03:04:22 -- target/dif.sh@31 -- # create_subsystem 0
00:19:43.304 03:04:22 -- target/dif.sh@18 -- # local sub_id=0
00:19:43.304 03:04:22 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:19:43.304 03:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:43.304 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.304 bdev_null0
00:19:43.304 03:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:43.304 03:04:22 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:19:43.304 03:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:43.304 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.304 03:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:43.304 03:04:22 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:19:43.304 03:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:43.304 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.304 03:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:43.304 03:04:22 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:43.304 03:04:22 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:43.304 03:04:22 -- common/autotest_common.sh@10 -- # set +x
00:19:43.304 [2024-04-23 03:04:22.298856] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:43.304 03:04:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:43.304 03:04:22 -- target/dif.sh@87 -- # fio /dev/fd/62
00:19:43.304 03:04:22 -- target/dif.sh@87 -- # create_json_sub_conf 0
00:19:43.304 03:04:22 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:19:43.304 03:04:22 -- nvmf/common.sh@521 -- # config=()
00:19:43.304 03:04:22 -- nvmf/common.sh@521 -- # local subsystem config
00:19:43.304 03:04:22 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:19:43.304 03:04:22 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:19:43.304 {
00:19:43.304 "params": {
00:19:43.304 "name": "Nvme$subsystem",
00:19:43.304 "trtype": "$TEST_TRANSPORT",
00:19:43.304 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:43.304 "adrfam": "ipv4",
00:19:43.304 "trsvcid": "$NVMF_PORT",
00:19:43.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:43.304 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:43.304 "hdgst": ${hdgst:-false},
00:19:43.304 "ddgst": ${ddgst:-false}
00:19:43.304 },
00:19:43.304 "method": "bdev_nvme_attach_controller"
00:19:43.304 }
00:19:43.304 EOF
00:19:43.304 )")
00:19:43.304 03:04:22 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:19:43.304 03:04:22 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:19:43.304 03:04:22 -- target/dif.sh@82 -- # gen_fio_conf
00:19:43.304 03:04:22 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:19:43.304 03:04:22 -- target/dif.sh@54 -- # local file
00:19:43.304 03:04:22 -- target/dif.sh@56 -- # cat
00:19:43.304 03:04:22 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:43.304 03:04:22 -- common/autotest_common.sh@1325 -- # local sanitizers
00:19:43.304 03:04:22 -- nvmf/common.sh@543 -- # cat
00:19:43.304 03:04:22 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:43.304 03:04:22 -- common/autotest_common.sh@1327 -- # shift
00:19:43.304 03:04:22 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:19:43.304 03:04:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # grep libasan
00:19:43.304 03:04:22 -- nvmf/common.sh@545 -- # jq .
00:19:43.304 03:04:22 -- target/dif.sh@72 -- # (( file = 1 ))
00:19:43.304 03:04:22 -- target/dif.sh@72 -- # (( file <= files ))
00:19:43.304 03:04:22 -- nvmf/common.sh@546 -- # IFS=,
00:19:43.304 03:04:22 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:19:43.304 "params": {
00:19:43.304 "name": "Nvme0",
00:19:43.304 "trtype": "tcp",
00:19:43.304 "traddr": "10.0.0.2",
00:19:43.304 "adrfam": "ipv4",
00:19:43.304 "trsvcid": "4420",
00:19:43.304 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:19:43.304 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:19:43.304 "hdgst": false,
00:19:43.304 "ddgst": false
00:19:43.304 },
00:19:43.304 "method": "bdev_nvme_attach_controller"
00:19:43.304 }'
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # asan_lib=
00:19:43.304 03:04:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:19:43.304 03:04:22 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:19:43.304 03:04:22 -- common/autotest_common.sh@1331 -- # asan_lib=
00:19:43.304 03:04:22 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:19:43.304 03:04:22 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:43.304 03:04:22 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:19:43.563 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:19:43.563 fio-3.35
00:19:43.563 Starting 1 thread
00:19:55.763
00:19:55.763 filename0: (groupid=0, jobs=1): err= 0: pid=94850: Tue Apr 23 03:04:32 2024
00:19:55.763 read: IOPS=7567, BW=29.6MiB/s (31.0MB/s)(296MiB/10001msec)
00:19:55.763 slat (usec): min=5, max=255, avg= 9.74, stdev= 5.69
00:19:55.763 clat (usec): min=323, max=4819, avg=499.67, stdev=69.71
00:19:55.763 lat (usec): min=329, max=4867, avg=509.41, stdev=70.59
00:19:55.763 clat percentiles (usec):
00:19:55.763 | 1.00th=[ 363], 5.00th=[ 400], 10.00th=[ 420], 20.00th=[ 449],
00:19:55.763 | 30.00th=[ 465], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 515],
00:19:55.763 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 578], 95.00th=[ 603],
00:19:55.763 | 99.00th=[ 644], 99.50th=[ 668], 99.90th=[ 766], 99.95th=[ 930],
00:19:55.763 | 99.99th=[ 1336]
00:19:55.763 bw ( KiB/s): min=28896, max=36224, per=100.00%, avg=30341.05, stdev=1886.26, samples=19
00:19:55.763 iops : min= 7224, max= 9056, avg=7585.26, stdev=471.56, samples=19
00:19:55.763 lat (usec) : 500=50.70%, 750=49.17%, 1000=0.09%
00:19:55.763 lat (msec) : 2=0.04%, 10=0.01%
00:19:55.763 cpu : usr=84.39%, sys=13.22%, ctx=97, majf=0, minf=0
00:19:55.763 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:55.763 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:55.763 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:55.763 issued rwts: total=75680,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:55.763 latency : target=0, window=0, percentile=100.00%, depth=4
00:19:55.763
00:19:55.763 Run status group 0 (all jobs):
00:19:55.763 READ: bw=29.6MiB/s (31.0MB/s), 29.6MiB/s-29.6MiB/s (31.0MB/s-31.0MB/s), io=296MiB (310MB), run=10001-10001msec
00:19:55.763 03:04:33 -- target/dif.sh@88 -- # destroy_subsystems 0
00:19:55.763 03:04:33 -- target/dif.sh@43 -- # local sub
00:19:55.763 03:04:33 -- target/dif.sh@45 -- # for sub in "$@"
00:19:55.763 03:04:33 -- target/dif.sh@46 -- # destroy_subsystem 0
00:19:55.763 03:04:33 -- target/dif.sh@36 -- # local sub_id=0
00:19:55.763 03:04:33 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.763 03:04:33 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 ************************************
00:19:55.763 END TEST fio_dif_1_default
00:19:55.763 ************************************
00:19:55.763 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.763
00:19:55.763 real 0m10.861s
00:19:55.763 user 0m8.986s
00:19:55.763 sys 0m1.559s
00:19:55.763 03:04:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 03:04:33 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:19:55.763 03:04:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:19:55.763 03:04:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 ************************************
00:19:55.763 START TEST fio_dif_1_multi_subsystems
00:19:55.763 ************************************
00:19:55.763 03:04:33 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems
00:19:55.763 03:04:33 -- target/dif.sh@92 -- # local files=1
00:19:55.763 03:04:33 -- target/dif.sh@94 -- # create_subsystems 0 1
00:19:55.763 03:04:33 -- target/dif.sh@28 -- # local sub
00:19:55.763 03:04:33 -- target/dif.sh@30 -- # for sub in "$@"
00:19:55.763 03:04:33 -- target/dif.sh@31 -- # create_subsystem 0
00:19:55.763 03:04:33 -- target/dif.sh@18 -- # local sub_id=0
00:19:55.763 03:04:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 bdev_null0
00:19:55.763 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.763 03:04:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.763 03:04:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.763 03:04:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 [2024-04-23 03:04:33.287333] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:55.763 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.763 03:04:33 -- target/dif.sh@30 -- # for sub in "$@"
00:19:55.763 03:04:33 -- target/dif.sh@31 -- # create_subsystem 1
00:19:55.763 03:04:33 -- target/dif.sh@18 -- # local sub_id=1
00:19:55.763 03:04:33 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:19:55.763 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.763 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.763 bdev_null1
00:19:55.764 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.764 03:04:33 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:19:55.764 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.764 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.764 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.764 03:04:33 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:19:55.764 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.764 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.764 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.764 03:04:33 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:55.764 03:04:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:19:55.764 03:04:33 -- common/autotest_common.sh@10 -- # set +x
00:19:55.764 03:04:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:19:55.764 03:04:33 -- target/dif.sh@95 -- # fio /dev/fd/62
00:19:55.764 03:04:33 -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:19:55.764 03:04:33 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:19:55.764 03:04:33 -- nvmf/common.sh@521 -- # config=()
00:19:55.764 03:04:33 -- nvmf/common.sh@521 -- # local subsystem config
00:19:55.764 03:04:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:19:55.764 03:04:33 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:19:55.764 03:04:33 -- target/dif.sh@82 -- # gen_fio_conf
00:19:55.764 03:04:33 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:19:55.764 03:04:33 -- target/dif.sh@54 -- # local file
00:19:55.764 03:04:33 -- target/dif.sh@56 -- # cat
00:19:55.764 03:04:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:19:55.764 {
00:19:55.764 "params": {
00:19:55.764 "name": "Nvme$subsystem",
00:19:55.764 "trtype": "$TEST_TRANSPORT",
00:19:55.764 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:55.764 "adrfam": "ipv4",
00:19:55.764 "trsvcid": "$NVMF_PORT",
00:19:55.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:55.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:55.764 "hdgst": ${hdgst:-false},
00:19:55.764 "ddgst": ${ddgst:-false}
00:19:55.764 },
00:19:55.764 "method": "bdev_nvme_attach_controller"
00:19:55.764 }
00:19:55.764 EOF
00:19:55.764 )")
00:19:55.764 03:04:33 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:19:55.764 03:04:33 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:19:55.764 03:04:33 -- common/autotest_common.sh@1325 -- # local sanitizers
00:19:55.764 03:04:33 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:55.764 03:04:33 -- common/autotest_common.sh@1327 -- # shift
00:19:55.764 03:04:33 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:19:55.764 03:04:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:19:55.764 03:04:33 -- target/dif.sh@72 -- # (( file = 1 ))
00:19:55.764 03:04:33 -- target/dif.sh@72 -- # (( file <= files ))
00:19:55.764 03:04:33 -- nvmf/common.sh@543 -- # cat
00:19:55.764 03:04:33 -- target/dif.sh@73 -- # cat
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # grep libasan
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:19:55.764 03:04:33 -- target/dif.sh@72 -- # (( file++ ))
00:19:55.764 03:04:33 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:19:55.764 03:04:33 -- target/dif.sh@72 -- # (( file <= files ))
00:19:55.764 03:04:33 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:19:55.764 {
00:19:55.764 "params": {
00:19:55.764 "name": "Nvme$subsystem",
00:19:55.764 "trtype": "$TEST_TRANSPORT",
00:19:55.764 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:55.764 "adrfam": "ipv4",
00:19:55.764 "trsvcid": "$NVMF_PORT",
00:19:55.764 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:55.764 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:55.764 "hdgst": ${hdgst:-false},
00:19:55.764 "ddgst": ${ddgst:-false}
00:19:55.764 },
00:19:55.764 "method": "bdev_nvme_attach_controller"
00:19:55.764 }
00:19:55.764 EOF
00:19:55.764 )")
00:19:55.764 03:04:33 -- nvmf/common.sh@543 -- # cat
00:19:55.764 03:04:33 -- nvmf/common.sh@545 -- # jq .
00:19:55.764 03:04:33 -- nvmf/common.sh@546 -- # IFS=,
00:19:55.764 03:04:33 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:19:55.764 "params": {
00:19:55.764 "name": "Nvme0",
00:19:55.764 "trtype": "tcp",
00:19:55.764 "traddr": "10.0.0.2",
00:19:55.764 "adrfam": "ipv4",
00:19:55.764 "trsvcid": "4420",
00:19:55.764 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:19:55.764 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:19:55.764 "hdgst": false,
00:19:55.764 "ddgst": false
00:19:55.764 },
00:19:55.764 "method": "bdev_nvme_attach_controller"
00:19:55.764 },{
00:19:55.764 "params": {
00:19:55.764 "name": "Nvme1",
00:19:55.764 "trtype": "tcp",
00:19:55.764 "traddr": "10.0.0.2",
00:19:55.764 "adrfam": "ipv4",
00:19:55.764 "trsvcid": "4420",
00:19:55.764 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:55.764 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:55.764 "hdgst": false,
00:19:55.764 "ddgst": false
00:19:55.764 },
00:19:55.764 "method": "bdev_nvme_attach_controller"
00:19:55.764 }'
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # asan_lib=
00:19:55.764 03:04:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:19:55.764 03:04:33 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:19:55.764 03:04:33 -- common/autotest_common.sh@1331 -- # asan_lib=
00:19:55.764 03:04:33 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:19:55.764 03:04:33 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:19:55.764 03:04:33 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:19:55.764 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:19:55.764 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:19:55.764 fio-3.35
00:19:55.764 Starting 2 threads
00:20:05.767
00:20:05.767 filename0: (groupid=0, jobs=1): err= 0: pid=95013: Tue Apr 23 03:04:44 2024
00:20:05.767 read: IOPS=4254, BW=16.6MiB/s (17.4MB/s)(166MiB/10001msec)
00:20:05.767 slat (usec): min=6, max=168, avg=14.86, stdev= 6.10
00:20:05.767 clat (usec): min=461, max=6449, avg=899.99, stdev=96.51
00:20:05.767 lat (usec): min=468, max=6477, avg=914.85, stdev=97.63
00:20:05.767 clat percentiles (usec):
00:20:05.767 | 1.00th=[ 742], 5.00th=[ 783], 10.00th=[ 807], 20.00th=[ 832],
00:20:05.767 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 898], 60.00th=[ 914],
00:20:05.767 | 70.00th=[ 938], 80.00th=[ 963], 90.00th=[ 1004], 95.00th=[ 1037],
00:20:05.767 | 99.00th=[ 1106], 99.50th=[ 1156], 99.90th=[ 1336], 99.95th=[ 1450],
00:20:05.767 | 99.99th=[ 1565]
00:20:05.767 bw ( KiB/s): min=16416, max=17280, per=50.09%, avg=17034.11, stdev=200.42, samples=19
00:20:05.767 iops : min= 4104, max= 4320, avg=4258.53, stdev=50.11, samples=19
00:20:05.767 lat (usec) : 500=0.01%, 750=1.45%, 1000=88.42%
00:20:05.767 lat (msec) : 2=10.12%, 10=0.01%
00:20:05.767 cpu : usr=89.19%, sys=9.28%, ctx=56, majf=0, minf=0
00:20:05.767 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:05.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:05.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:05.767 issued rwts: total=42548,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:05.767 latency : target=0, window=0, percentile=100.00%, depth=4
00:20:05.767 filename1: (groupid=0, jobs=1): err= 0: pid=95014: Tue Apr 23 03:04:44 2024
00:20:05.767 read: IOPS=4247, BW=16.6MiB/s (17.4MB/s)(166MiB/10001msec)
00:20:05.767 slat (nsec): min=4622, max=81858, avg=14735.26, stdev=5890.06
00:20:05.767 clat (usec): min=449, max=7037, avg=900.98, stdev=111.59
00:20:05.767 lat (usec): min=457, max=7052, avg=915.72, stdev=112.04
00:20:05.767 clat percentiles (usec):
00:20:05.767 | 1.00th=[ 783], 5.00th=[ 807], 10.00th=[ 816], 20.00th=[ 840],
00:20:05.767 | 30.00th=[ 857], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 906],
00:20:05.767 | 70.00th=[ 930], 80.00th=[ 955], 90.00th=[ 996], 95.00th=[ 1029],
00:20:05.767 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1401], 99.95th=[ 1549],
00:20:05.767 | 99.99th=[ 4883]
00:20:05.767 bw ( KiB/s): min=16416, max=17248, per=50.00%, avg=17003.79, stdev=236.17, samples=19
00:20:05.767 iops : min= 4104, max= 4312, avg=4250.95, stdev=59.04, samples=19
00:20:05.767 lat (usec) : 500=0.05%, 750=0.06%, 1000=91.51%
00:20:05.767 lat (msec) : 2=8.35%, 10=0.04%
00:20:05.767 cpu : usr=89.84%, sys=8.54%, ctx=62, majf=0, minf=0
00:20:05.767 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:05.767 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:05.767 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:05.767 issued rwts: total=42480,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:05.767 latency : target=0, window=0, percentile=100.00%, depth=4
00:20:05.767
00:20:05.767 Run status group 0 (all jobs):
00:20:05.767 READ: bw=33.2MiB/s (34.8MB/s), 16.6MiB/s-16.6MiB/s (17.4MB/s-17.4MB/s), io=332MiB (348MB), run=10001-10001msec
00:20:05.767 03:04:44 -- target/dif.sh@96 -- # destroy_subsystems 0 1
00:20:05.767 03:04:44 -- target/dif.sh@43 -- # local sub
00:20:05.767 03:04:44 -- target/dif.sh@45 -- # for sub in "$@"
00:20:05.767 03:04:44 -- target/dif.sh@46 -- # destroy_subsystem 0
00:20:05.767 03:04:44 -- target/dif.sh@36 -- # local sub_id=0
00:20:05.767 03:04:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@45 -- # for sub in "$@"
00:20:05.767 03:04:44 -- target/dif.sh@46 -- # destroy_subsystem 1
00:20:05.767 03:04:44 -- target/dif.sh@36 -- # local sub_id=1
00:20:05.767 03:04:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 ************************************
00:20:05.767 END TEST fio_dif_1_multi_subsystems
00:20:05.767 ************************************
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767
00:20:05.767 real 0m10.987s
00:20:05.767 user 0m18.556s
00:20:05.767 sys 0m2.055s
00:20:05.767 03:04:44 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 03:04:44 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params
00:20:05.767 03:04:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:20:05.767 03:04:44 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 ************************************
00:20:05.767 START TEST fio_dif_rand_params
00:20:05.767 ************************************
00:20:05.767 03:04:44 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params
00:20:05.767 03:04:44 -- target/dif.sh@100 -- # local NULL_DIF
00:20:05.767 03:04:44 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files
00:20:05.767 03:04:44 -- target/dif.sh@103 -- # NULL_DIF=3
00:20:05.767 03:04:44 -- target/dif.sh@103 -- # bs=128k
00:20:05.767 03:04:44 -- target/dif.sh@103 -- # numjobs=3
00:20:05.767 03:04:44 -- target/dif.sh@103 -- # iodepth=3
00:20:05.767 03:04:44 -- target/dif.sh@103 -- # runtime=5
00:20:05.767 03:04:44 -- target/dif.sh@105 -- # create_subsystems 0
00:20:05.767 03:04:44 -- target/dif.sh@28 -- # local sub
00:20:05.767 03:04:44 -- target/dif.sh@30 -- # for sub in "$@"
00:20:05.767 03:04:44 -- target/dif.sh@31 -- # create_subsystem 0
00:20:05.767 03:04:44 -- target/dif.sh@18 -- # local sub_id=0
00:20:05.767 03:04:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 bdev_null0
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:20:05.767 03:04:44 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:05.767 03:04:44 -- common/autotest_common.sh@10 -- # set +x
00:20:05.767 [2024-04-23 03:04:44.417544] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:05.767 03:04:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:05.767 03:04:44 -- target/dif.sh@106 -- # fio /dev/fd/62
00:20:05.767 03:04:44 -- target/dif.sh@106 -- # create_json_sub_conf 0
00:20:05.767 03:04:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:20:05.767 03:04:44 -- nvmf/common.sh@521 -- # config=()
00:20:05.767 03:04:44 -- nvmf/common.sh@521 -- # local subsystem config
00:20:05.767 03:04:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}"
00:20:05.767 03:04:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF
00:20:05.767 {
00:20:05.767 "params": {
00:20:05.767 "name": "Nvme$subsystem",
00:20:05.767 "trtype": "$TEST_TRANSPORT",
00:20:05.767 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:05.767 "adrfam": "ipv4",
00:20:05.767 "trsvcid": "$NVMF_PORT",
00:20:05.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:05.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:05.767 "hdgst": ${hdgst:-false},
00:20:05.767 "ddgst": ${ddgst:-false}
00:20:05.767 },
00:20:05.767 "method": "bdev_nvme_attach_controller"
00:20:05.767 }
00:20:05.767 EOF
00:20:05.767 )")
00:20:05.767 03:04:44 -- target/dif.sh@82 -- # gen_fio_conf
00:20:05.767 03:04:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:20:05.767 03:04:44 -- target/dif.sh@54 -- # local file
00:20:05.767 03:04:44 -- target/dif.sh@56 -- # cat
00:20:05.768 03:04:44 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:20:05.768 03:04:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio
00:20:05.768 03:04:44 -- nvmf/common.sh@543 -- # cat
00:20:05.768 03:04:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:20:05.768 03:04:44 -- common/autotest_common.sh@1325 -- # local sanitizers
00:20:05.768 03:04:44 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:05.768 03:04:44 -- common/autotest_common.sh@1327 -- # shift
00:20:05.768 03:04:44 -- common/autotest_common.sh@1329 -- # local asan_lib=
00:20:05.768 03:04:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:05.768 03:04:44 -- target/dif.sh@72 -- # (( file = 1 ))
00:20:05.768 03:04:44 -- target/dif.sh@72 -- # (( file <= files ))
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # grep libasan
00:20:05.768 03:04:44 -- nvmf/common.sh@545 -- # jq .
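Each rpc_cmd in the setup just traced maps directly onto SPDK's scripts/rpc.py client talking to the /var/tmp/spdk.sock socket shown earlier (rpc_cmd is the test suite's wrapper around it). A standalone sketch of this DIF-type-3 subsystem setup, with paths taken from this CI environment and the flag spellings copied from the rpc_cmd arguments above:

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
    # 64 MB null bdev, 512-byte blocks carrying 16 bytes of metadata, DIF type 3.
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    # Export it over NVMe/TCP on 10.0.0.2:4420.
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Because the RPC socket is a Unix-domain socket on the shared filesystem, these commands work from the host even though nvmf_tgt itself runs inside the nvmf_tgt_ns_spdk network namespace.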
00:20:05.768 03:04:44 -- nvmf/common.sh@546 -- # IFS=,
00:20:05.768 03:04:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{
00:20:05.768 "params": {
00:20:05.768 "name": "Nvme0",
00:20:05.768 "trtype": "tcp",
00:20:05.768 "traddr": "10.0.0.2",
00:20:05.768 "adrfam": "ipv4",
00:20:05.768 "trsvcid": "4420",
00:20:05.768 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:20:05.768 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:20:05.768 "hdgst": false,
00:20:05.768 "ddgst": false
00:20:05.768 },
00:20:05.768 "method": "bdev_nvme_attach_controller"
00:20:05.768 }'
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # asan_lib=
00:20:05.768 03:04:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:20:05.768 03:04:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}"
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}'
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan
00:20:05.768 03:04:44 -- common/autotest_common.sh@1331 -- # asan_lib=
00:20:05.768 03:04:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]]
00:20:05.768 03:04:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:20:05.768 03:04:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:20:05.768 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3
00:20:05.768 ...
00:20:05.768 fio-3.35
00:20:05.768 Starting 3 threads
00:20:11.037
00:20:11.037 filename0: (groupid=0, jobs=1): err= 0: pid=95169: Tue Apr 23 03:04:50 2024
00:20:11.037 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(147MiB/5008msec)
00:20:11.037 slat (nsec): min=7634, max=47676, avg=15886.87, stdev=5227.34
00:20:11.037 clat (usec): min=7356, max=14007, avg=12770.63, stdev=621.36
00:20:11.037 lat (usec): min=7378, max=14022, avg=12786.52, stdev=621.80
00:20:11.037 clat percentiles (usec):
00:20:11.037 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125],
00:20:11.037 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12780], 60.00th=[13042],
00:20:11.037 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13698],
00:20:11.037 | 99.00th=[13829], 99.50th=[13960], 99.90th=[13960], 99.95th=[13960],
00:20:11.037 | 99.99th=[13960]
00:20:11.037 bw ( KiB/s): min=29184, max=30720, per=33.25%, avg=29866.67, stdev=461.51, samples=9
00:20:11.037 iops : min= 228, max= 240, avg=233.33, stdev= 3.61, samples=9
00:20:11.037 lat (msec) : 10=0.26%, 20=99.74%
00:20:11.037 cpu : usr=91.69%, sys=7.71%, ctx=13, majf=0, minf=0
00:20:11.037 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:11.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.037 issued rwts: total=1173,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:11.037 latency : target=0, window=0, percentile=100.00%, depth=3
00:20:11.037 filename0: (groupid=0, jobs=1): err= 0: pid=95170: Tue Apr 23 03:04:50 2024
00:20:11.037 read: IOPS=234, BW=29.3MiB/s (30.7MB/s)(147MiB/5010msec)
00:20:11.037 slat (nsec): min=7480, max=44663, avg=11263.39, stdev=5075.82
00:20:11.037 clat (usec): min=9810, max=14002, avg=12782.50, stdev=584.00
00:20:11.037 lat (usec): min=9818, max=14045, avg=12793.76, stdev=584.30
00:20:11.037 clat percentiles (usec):
00:20:11.037 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125],
00:20:11.037 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042],
00:20:11.037 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698],
00:20:11.037 | 99.00th=[13960], 99.50th=[13960], 99.90th=[13960], 99.95th=[13960],
00:20:11.037 | 99.99th=[13960]
00:20:11.037 bw ( KiB/s): min=29184, max=30720, per=33.34%, avg=29952.00, stdev=512.00, samples=10
00:20:11.037 iops : min= 228, max= 240, avg=234.00, stdev= 4.00, samples=10
00:20:11.037 lat (msec) : 10=0.26%, 20=99.74%
00:20:11.037 cpu : usr=91.36%, sys=7.95%, ctx=103, majf=0, minf=0
00:20:11.037 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:11.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.037 issued rwts: total=1173,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:11.037 latency : target=0, window=0, percentile=100.00%, depth=3
00:20:11.037 filename0: (groupid=0, jobs=1): err= 0: pid=95171: Tue Apr 23 03:04:50 2024
00:20:11.037 read: IOPS=233, BW=29.2MiB/s (30.7MB/s)(146MiB/5001msec)
00:20:11.037 slat (nsec): min=8009, max=53458, avg=15279.62, stdev=4982.50
00:20:11.037 clat (usec): min=11778, max=14016, avg=12787.73, stdev=559.57
00:20:11.037 lat (usec): min=11791, max=14028, avg=12803.01, stdev=559.88
00:20:11.037 clat percentiles (usec):
00:20:11.037 | 1.00th=[11863], 5.00th=[11994], 10.00th=[11994], 20.00th=[12125],
00:20:11.037 | 30.00th=[12387], 40.00th=[12649], 50.00th=[12911], 60.00th=[13042],
00:20:11.037 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[13698],
00:20:11.037 | 99.00th=[13829], 99.50th=[13960], 99.90th=[13960], 99.95th=[13960],
00:20:11.037 | 99.99th=[13960]
00:20:11.037 bw ( KiB/s): min=29184, max=30720, per=33.25%, avg=29866.67, stdev=461.51, samples=9
00:20:11.037 iops : min= 228, max= 240, avg=233.33, stdev= 3.61, samples=9
00:20:11.037 lat (msec) : 20=100.00%
00:20:11.037 cpu : usr=92.08%, sys=7.34%, ctx=70, majf=0, minf=0
00:20:11.037 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:20:11.037 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.037 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:20:11.037 issued rwts: total=1170,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:20:11.037 latency : target=0, window=0, percentile=100.00%, depth=3
00:20:11.037
00:20:11.037 Run status group 0 (all jobs):
00:20:11.037 READ: bw=87.7MiB/s (92.0MB/s), 29.2MiB/s-29.3MiB/s (30.7MB/s-30.7MB/s), io=440MiB (461MB), run=5001-5010msec
00:20:11.296 03:04:50 -- target/dif.sh@107 -- # destroy_subsystems 0
00:20:11.296 03:04:50 -- target/dif.sh@43 -- # local sub
00:20:11.296 03:04:50 -- target/dif.sh@45 -- # for sub in "$@"
00:20:11.296 03:04:50 -- target/dif.sh@46 -- # destroy_subsystem 0
00:20:11.296 03:04:50 -- target/dif.sh@36 -- # local sub_id=0
00:20:11.296 03:04:50 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:20:11.296 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.296 03:04:50 -- common/autotest_common.sh@10 -- # set +x
00:20:11.296 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.296 03:04:50 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:20:11.296 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.296 03:04:50 -- common/autotest_common.sh@10 -- # set +x
00:20:11.296 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.296 03:04:50 -- target/dif.sh@109 -- # NULL_DIF=2
00:20:11.296 03:04:50 -- target/dif.sh@109 -- # bs=4k
00:20:11.296 03:04:50 -- target/dif.sh@109 -- # numjobs=8
00:20:11.296 03:04:50 -- target/dif.sh@109 -- # iodepth=16
00:20:11.296 03:04:50 -- target/dif.sh@109 -- # runtime=
00:20:11.296 03:04:50 -- target/dif.sh@109 -- # files=2
00:20:11.296 03:04:50 -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:20:11.296 03:04:50 -- target/dif.sh@28 -- # local sub
00:20:11.296 03:04:50 -- target/dif.sh@30 -- # for sub in "$@"
00:20:11.296 03:04:50 -- target/dif.sh@31 -- # create_subsystem 0
00:20:11.296 03:04:50 -- target/dif.sh@18 -- # local sub_id=0
00:20:11.296 03:04:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:20:11.296 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.296 03:04:50 -- common/autotest_common.sh@10 -- # set +x
00:20:11.297 bdev_null0
00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.297 03:04:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x
00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.297 03:04:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable
00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x
00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:20:11.297 03:04:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.297 03:04:50 -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.297 03:04:50 -- target/dif.sh@31 -- # create_subsystem 2 00:20:11.297 03:04:50 -- target/dif.sh@18 -- # local sub_id=2 00:20:11.297 03:04:50 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.297 bdev_null2 00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.297 03:04:50 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.297 03:04:50 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.297 03:04:50 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:11.297 03:04:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.297 03:04:50 -- common/autotest_common.sh@10 -- # set +x 00:20:11.297 03:04:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.297 03:04:50 -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:11.297 03:04:50 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:11.297 03:04:50 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:11.297 03:04:50 -- nvmf/common.sh@521 -- # config=() 00:20:11.297 03:04:50 -- nvmf/common.sh@521 -- # local subsystem config 00:20:11.297 03:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:11.297 03:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:11.297 { 00:20:11.297 "params": { 00:20:11.297 "name": "Nvme$subsystem", 00:20:11.297 "trtype": "$TEST_TRANSPORT", 00:20:11.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.297 "adrfam": "ipv4", 00:20:11.297 "trsvcid": "$NVMF_PORT", 00:20:11.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.297 "hdgst": ${hdgst:-false}, 00:20:11.297 "ddgst": ${ddgst:-false} 00:20:11.297 }, 00:20:11.297 "method": "bdev_nvme_attach_controller" 00:20:11.297 } 00:20:11.297 EOF 00:20:11.297 )") 00:20:11.297 03:04:50 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.297 03:04:50 -- target/dif.sh@82 -- # gen_fio_conf 00:20:11.297 03:04:50 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.297 03:04:50 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:11.297 03:04:50 -- target/dif.sh@54 -- # local file 00:20:11.297 03:04:50 -- target/dif.sh@56 -- # cat 00:20:11.297 
03:04:50 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:11.297 03:04:50 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:11.297 03:04:50 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.297 03:04:50 -- nvmf/common.sh@543 -- # cat 00:20:11.297 03:04:50 -- common/autotest_common.sh@1327 -- # shift 00:20:11.297 03:04:50 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:11.297 03:04:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.297 03:04:50 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:11.297 03:04:50 -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.297 03:04:50 -- target/dif.sh@73 -- # cat 00:20:11.297 03:04:50 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.297 03:04:50 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:11.297 03:04:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:11.297 03:04:50 -- target/dif.sh@72 -- # (( file++ )) 00:20:11.297 03:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:11.297 03:04:50 -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.297 03:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:11.297 { 00:20:11.297 "params": { 00:20:11.297 "name": "Nvme$subsystem", 00:20:11.297 "trtype": "$TEST_TRANSPORT", 00:20:11.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.297 "adrfam": "ipv4", 00:20:11.297 "trsvcid": "$NVMF_PORT", 00:20:11.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.297 "hdgst": ${hdgst:-false}, 00:20:11.297 "ddgst": ${ddgst:-false} 00:20:11.297 }, 00:20:11.297 "method": "bdev_nvme_attach_controller" 00:20:11.297 } 00:20:11.297 EOF 00:20:11.297 )") 00:20:11.297 03:04:50 -- target/dif.sh@73 -- # cat 00:20:11.297 03:04:50 -- nvmf/common.sh@543 -- # cat 00:20:11.297 03:04:50 -- target/dif.sh@72 -- # (( file++ )) 00:20:11.297 03:04:50 -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.297 03:04:50 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:11.297 03:04:50 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:11.297 { 00:20:11.297 "params": { 00:20:11.297 "name": "Nvme$subsystem", 00:20:11.297 "trtype": "$TEST_TRANSPORT", 00:20:11.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.297 "adrfam": "ipv4", 00:20:11.297 "trsvcid": "$NVMF_PORT", 00:20:11.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.297 "hdgst": ${hdgst:-false}, 00:20:11.297 "ddgst": ${ddgst:-false} 00:20:11.297 }, 00:20:11.297 "method": "bdev_nvme_attach_controller" 00:20:11.297 } 00:20:11.297 EOF 00:20:11.297 )") 00:20:11.297 03:04:50 -- nvmf/common.sh@543 -- # cat 00:20:11.297 03:04:50 -- nvmf/common.sh@545 -- # jq . 
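The heredocs above come from gen_nvmf_target_json, which emits one bdev_nvme_attach_controller stanza per subsystem ID before handing the result to jq. A rough standalone equivalent of what the trace shows; the bare-array wrapper on the last line is an assumption of this sketch (the real helper splices the stanzas into fio's full bdev-subsystem config):

gen_nvmf_target_json() {
    local subsystem
    local -a config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # IFS=, makes ${config[*]} join the stanzas with commas, as traced.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}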
00:20:11.297 03:04:50 -- nvmf/common.sh@546 -- # IFS=, 00:20:11.297 03:04:50 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:11.297 "params": { 00:20:11.297 "name": "Nvme0", 00:20:11.297 "trtype": "tcp", 00:20:11.297 "traddr": "10.0.0.2", 00:20:11.297 "adrfam": "ipv4", 00:20:11.297 "trsvcid": "4420", 00:20:11.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.297 "hdgst": false, 00:20:11.297 "ddgst": false 00:20:11.297 }, 00:20:11.297 "method": "bdev_nvme_attach_controller" 00:20:11.297 },{ 00:20:11.297 "params": { 00:20:11.297 "name": "Nvme1", 00:20:11.297 "trtype": "tcp", 00:20:11.297 "traddr": "10.0.0.2", 00:20:11.297 "adrfam": "ipv4", 00:20:11.297 "trsvcid": "4420", 00:20:11.297 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.297 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.297 "hdgst": false, 00:20:11.297 "ddgst": false 00:20:11.297 }, 00:20:11.297 "method": "bdev_nvme_attach_controller" 00:20:11.297 },{ 00:20:11.297 "params": { 00:20:11.297 "name": "Nvme2", 00:20:11.297 "trtype": "tcp", 00:20:11.297 "traddr": "10.0.0.2", 00:20:11.298 "adrfam": "ipv4", 00:20:11.298 "trsvcid": "4420", 00:20:11.298 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.298 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.298 "hdgst": false, 00:20:11.298 "ddgst": false 00:20:11.298 }, 00:20:11.298 "method": "bdev_nvme_attach_controller" 00:20:11.298 }' 00:20:11.298 03:04:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:11.298 03:04:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:11.298 03:04:50 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.298 03:04:50 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.298 03:04:50 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:11.298 03:04:50 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:11.556 03:04:50 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:11.556 03:04:50 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:11.556 03:04:50 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:11.557 03:04:50 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.557 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:11.557 ... 00:20:11.557 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:11.557 ... 00:20:11.557 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:11.557 ... 
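With the JSON printed above, the run boils down to stock fio plus SPDK's bdev ioengine, fed over anonymous file descriptors (/dev/fd/62 for the bdev config, /dev/fd/61 for the generated job file). A condensed sketch of the launch; the ldd probing in the trace only looks for an ASAN runtime to preload ahead of the plugin, and finds none in this run:

# Process substitution stands in for the harness's explicit /dev/fd plumbing.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
LD_PRELOAD="$plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0 1 2) \
    <(gen_fio_conf)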
00:20:11.557 fio-3.35 00:20:11.557 Starting 24 threads 00:20:23.789 00:20:23.789 filename0: (groupid=0, jobs=1): err= 0: pid=95269: Tue Apr 23 03:05:01 2024 00:20:23.789 read: IOPS=182, BW=729KiB/s (746kB/s)(7316KiB/10037msec) 00:20:23.789 slat (usec): min=3, max=10024, avg=19.44, stdev=234.13 00:20:23.789 clat (msec): min=24, max=136, avg=87.70, stdev=22.86 00:20:23.789 lat (msec): min=24, max=136, avg=87.72, stdev=22.86 00:20:23.789 clat percentiles (msec): 00:20:23.789 | 1.00th=[ 35], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 70], 00:20:23.790 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 83], 60.00th=[ 96], 00:20:23.790 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 122], 00:20:23.790 | 99.00th=[ 126], 99.50th=[ 126], 99.90th=[ 134], 99.95th=[ 138], 00:20:23.790 | 99.99th=[ 138] 00:20:23.790 bw ( KiB/s): min= 616, max= 1024, per=4.29%, avg=725.20, stdev=119.18, samples=20 00:20:23.790 iops : min= 154, max= 256, avg=181.30, stdev=29.79, samples=20 00:20:23.790 lat (msec) : 50=4.76%, 100=57.30%, 250=37.94% 00:20:23.790 cpu : usr=31.14%, sys=1.96%, ctx=1074, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=82.9%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95270: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=177, BW=710KiB/s (727kB/s)(7128KiB/10038msec) 00:20:23.790 slat (usec): min=7, max=8026, avg=18.74, stdev=189.87 00:20:23.790 clat (msec): min=27, max=146, avg=90.01, stdev=23.88 00:20:23.790 lat (msec): min=27, max=146, avg=90.03, stdev=23.88 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 72], 00:20:23.790 | 30.00th=[ 72], 40.00th=[ 80], 50.00th=[ 86], 60.00th=[ 105], 00:20:23.790 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 121], 00:20:23.790 | 99.00th=[ 131], 99.50th=[ 134], 99.90th=[ 148], 99.95th=[ 148], 00:20:23.790 | 99.99th=[ 148] 00:20:23.790 bw ( KiB/s): min= 560, max= 1008, per=4.18%, avg=706.40, stdev=131.35, samples=20 00:20:23.790 iops : min= 140, max= 252, avg=176.60, stdev=32.84, samples=20 00:20:23.790 lat (msec) : 50=6.29%, 100=50.17%, 250=43.55% 00:20:23.790 cpu : usr=37.67%, sys=2.22%, ctx=1144, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.6%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=87.6%, 8=12.2%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95271: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=181, BW=728KiB/s (745kB/s)(7296KiB/10028msec) 00:20:23.790 slat (usec): min=4, max=8026, avg=23.76, stdev=265.25 00:20:23.790 clat (msec): min=23, max=143, avg=87.86, stdev=22.95 00:20:23.790 lat (msec): min=23, max=143, avg=87.89, stdev=22.95 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 40], 5.00th=[ 50], 10.00th=[ 61], 20.00th=[ 72], 00:20:23.790 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 96], 00:20:23.790 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 
121], 00:20:23.790 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 142], 99.95th=[ 144], 00:20:23.790 | 99.99th=[ 144] 00:20:23.790 bw ( KiB/s): min= 632, max= 976, per=4.28%, avg=723.20, stdev=97.30, samples=20 00:20:23.790 iops : min= 158, max= 244, avg=180.80, stdev=24.33, samples=20 00:20:23.790 lat (msec) : 50=5.54%, 100=57.68%, 250=36.79% 00:20:23.790 cpu : usr=31.10%, sys=1.88%, ctx=906, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=0.4%, 4=1.5%, 8=82.5%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=87.2%, 8=12.5%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95272: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=178, BW=713KiB/s (730kB/s)(7136KiB/10009msec) 00:20:23.790 slat (usec): min=4, max=6454, avg=25.20, stdev=224.19 00:20:23.790 clat (msec): min=44, max=136, avg=89.58, stdev=21.41 00:20:23.790 lat (msec): min=44, max=136, avg=89.61, stdev=21.41 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 46], 5.00th=[ 57], 10.00th=[ 67], 20.00th=[ 71], 00:20:23.790 | 30.00th=[ 74], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 103], 00:20:23.790 | 70.00th=[ 109], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 121], 00:20:23.790 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 138], 99.95th=[ 138], 00:20:23.790 | 99.99th=[ 138] 00:20:23.790 bw ( KiB/s): min= 632, max= 976, per=4.21%, avg=711.68, stdev=102.69, samples=19 00:20:23.790 iops : min= 158, max= 244, avg=177.89, stdev=25.63, samples=19 00:20:23.790 lat (msec) : 50=2.86%, 100=56.56%, 250=40.58% 00:20:23.790 cpu : usr=39.45%, sys=2.22%, ctx=1370, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.8%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=88.3%, 8=10.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95273: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=174, BW=699KiB/s (716kB/s)(7004KiB/10022msec) 00:20:23.790 slat (usec): min=4, max=4026, avg=16.65, stdev=96.01 00:20:23.790 clat (msec): min=38, max=144, avg=91.49, stdev=22.39 00:20:23.790 lat (msec): min=38, max=144, avg=91.51, stdev=22.38 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 67], 20.00th=[ 72], 00:20:23.790 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 106], 00:20:23.790 | 70.00th=[ 108], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 122], 00:20:23.790 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:20:23.790 | 99.99th=[ 144] 00:20:23.790 bw ( KiB/s): min= 512, max= 920, per=4.10%, avg=693.95, stdev=94.20, samples=20 00:20:23.790 iops : min= 128, max= 230, avg=173.45, stdev=23.55, samples=20 00:20:23.790 lat (msec) : 50=2.68%, 100=54.83%, 250=42.49% 00:20:23.790 cpu : usr=33.48%, sys=1.81%, ctx=1271, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=77.4%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued 
rwts: total=1751,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95274: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=179, BW=720KiB/s (737kB/s)(7212KiB/10022msec) 00:20:23.790 slat (usec): min=4, max=8026, avg=24.08, stdev=231.11 00:20:23.790 clat (msec): min=40, max=152, avg=88.78, stdev=21.74 00:20:23.790 lat (msec): min=40, max=152, avg=88.81, stdev=21.74 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 47], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 70], 00:20:23.790 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 83], 60.00th=[ 99], 00:20:23.790 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 121], 00:20:23.790 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 153], 99.95th=[ 153], 00:20:23.790 | 99.99th=[ 153] 00:20:23.790 bw ( KiB/s): min= 640, max= 944, per=4.23%, avg=714.75, stdev=86.13, samples=20 00:20:23.790 iops : min= 160, max= 236, avg=178.65, stdev=21.49, samples=20 00:20:23.790 lat (msec) : 50=2.44%, 100=58.07%, 250=39.49% 00:20:23.790 cpu : usr=41.38%, sys=2.24%, ctx=1183, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.3%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=87.9%, 8=11.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95275: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=184, BW=739KiB/s (757kB/s)(7404KiB/10013msec) 00:20:23.790 slat (usec): min=4, max=8030, avg=28.36, stdev=322.39 00:20:23.790 clat (msec): min=24, max=131, avg=86.41, stdev=23.60 00:20:23.790 lat (msec): min=24, max=131, avg=86.43, stdev=23.59 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 32], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 70], 00:20:23.790 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 90], 00:20:23.790 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:20:23.790 | 99.00th=[ 123], 99.50th=[ 126], 99.90th=[ 132], 99.95th=[ 132], 00:20:23.790 | 99.99th=[ 132] 00:20:23.790 bw ( KiB/s): min= 608, max= 1056, per=4.34%, avg=733.90, stdev=122.90, samples=20 00:20:23.790 iops : min= 152, max= 264, avg=183.45, stdev=30.70, samples=20 00:20:23.790 lat (msec) : 50=7.94%, 100=56.24%, 250=35.82% 00:20:23.790 cpu : usr=35.92%, sys=1.94%, ctx=1055, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.790 filename0: (groupid=0, jobs=1): err= 0: pid=95276: Tue Apr 23 03:05:01 2024 00:20:23.790 read: IOPS=173, BW=693KiB/s (710kB/s)(6956KiB/10038msec) 00:20:23.790 slat (usec): min=4, max=12027, avg=35.25, stdev=418.34 00:20:23.790 clat (msec): min=41, max=168, avg=92.16, stdev=22.71 00:20:23.790 lat (msec): min=41, max=168, avg=92.19, stdev=22.72 00:20:23.790 clat percentiles (msec): 00:20:23.790 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 66], 20.00th=[ 72], 00:20:23.790 | 30.00th=[ 74], 40.00th=[ 81], 50.00th=[ 85], 60.00th=[ 107], 00:20:23.790 | 
70.00th=[ 110], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 122], 00:20:23.790 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 169], 99.95th=[ 169], 00:20:23.790 | 99.99th=[ 169] 00:20:23.790 bw ( KiB/s): min= 512, max= 920, per=4.08%, avg=689.20, stdev=116.78, samples=20 00:20:23.790 iops : min= 128, max= 230, avg=172.30, stdev=29.19, samples=20 00:20:23.790 lat (msec) : 50=2.42%, 100=52.62%, 250=44.97% 00:20:23.790 cpu : usr=38.11%, sys=2.14%, ctx=1056, majf=0, minf=9 00:20:23.790 IO depths : 1=0.1%, 2=1.5%, 4=5.9%, 8=77.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:23.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.790 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95277: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=175, BW=703KiB/s (720kB/s)(7060KiB/10038msec) 00:20:23.791 slat (usec): min=4, max=5487, avg=19.17, stdev=161.53 00:20:23.791 clat (msec): min=38, max=149, avg=90.87, stdev=23.59 00:20:23.791 lat (msec): min=38, max=149, avg=90.89, stdev=23.59 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 47], 5.00th=[ 52], 10.00th=[ 65], 20.00th=[ 70], 00:20:23.791 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 87], 60.00th=[ 106], 00:20:23.791 | 70.00th=[ 110], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 122], 00:20:23.791 | 99.00th=[ 142], 99.50th=[ 150], 99.90th=[ 150], 99.95th=[ 150], 00:20:23.791 | 99.99th=[ 150] 00:20:23.791 bw ( KiB/s): min= 512, max= 1008, per=4.14%, avg=699.60, stdev=129.06, samples=20 00:20:23.791 iops : min= 128, max= 252, avg=174.90, stdev=32.27, samples=20 00:20:23.791 lat (msec) : 50=3.68%, 100=50.93%, 250=45.38% 00:20:23.791 cpu : usr=41.62%, sys=2.65%, ctx=1193, majf=0, minf=9 00:20:23.791 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95278: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=176, BW=706KiB/s (723kB/s)(7080KiB/10023msec) 00:20:23.791 slat (usec): min=4, max=8017, avg=21.02, stdev=212.81 00:20:23.791 clat (msec): min=38, max=143, avg=90.45, stdev=21.69 00:20:23.791 lat (msec): min=38, max=143, avg=90.48, stdev=21.68 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 67], 20.00th=[ 72], 00:20:23.791 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 99], 00:20:23.791 | 70.00th=[ 108], 80.00th=[ 116], 90.00th=[ 121], 95.00th=[ 121], 00:20:23.791 | 99.00th=[ 129], 99.50th=[ 131], 99.90th=[ 142], 99.95th=[ 144], 00:20:23.791 | 99.99th=[ 144] 00:20:23.791 bw ( KiB/s): min= 624, max= 912, per=4.15%, avg=701.50, stdev=84.02, samples=20 00:20:23.791 iops : min= 156, max= 228, avg=175.35, stdev=21.02, samples=20 00:20:23.791 lat (msec) : 50=2.71%, 100=57.57%, 250=39.72% 00:20:23.791 cpu : usr=31.38%, sys=1.60%, ctx=904, majf=0, minf=9 00:20:23.791 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=88.4%, 8=10.3%, 
16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95279: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=175, BW=703KiB/s (720kB/s)(7076KiB/10070msec) 00:20:23.791 slat (usec): min=6, max=4021, avg=15.83, stdev=95.40 00:20:23.791 clat (msec): min=3, max=156, avg=90.84, stdev=26.59 00:20:23.791 lat (msec): min=3, max=156, avg=90.86, stdev=26.60 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 8], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 70], 00:20:23.791 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 96], 60.00th=[ 108], 00:20:23.791 | 70.00th=[ 110], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 124], 00:20:23.791 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:20:23.791 | 99.99th=[ 157] 00:20:23.791 bw ( KiB/s): min= 560, max= 1269, per=4.14%, avg=700.65, stdev=169.56, samples=20 00:20:23.791 iops : min= 140, max= 317, avg=175.15, stdev=42.35, samples=20 00:20:23.791 lat (msec) : 4=0.11%, 10=1.70%, 50=4.52%, 100=45.62%, 250=48.05% 00:20:23.791 cpu : usr=38.52%, sys=2.39%, ctx=1296, majf=0, minf=0 00:20:23.791 IO depths : 1=0.1%, 2=1.2%, 4=4.7%, 8=77.9%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=88.9%, 8=10.1%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1769,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95280: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=167, BW=671KiB/s (687kB/s)(6716KiB/10007msec) 00:20:23.791 slat (usec): min=4, max=8031, avg=28.71, stdev=309.03 00:20:23.791 clat (msec): min=41, max=166, avg=95.15, stdev=23.40 00:20:23.791 lat (msec): min=42, max=166, avg=95.18, stdev=23.39 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 72], 00:20:23.791 | 30.00th=[ 78], 40.00th=[ 84], 50.00th=[ 102], 60.00th=[ 110], 00:20:23.791 | 70.00th=[ 112], 80.00th=[ 117], 90.00th=[ 121], 95.00th=[ 123], 00:20:23.791 | 99.00th=[ 153], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 167], 00:20:23.791 | 99.99th=[ 167] 00:20:23.791 bw ( KiB/s): min= 496, max= 920, per=3.95%, avg=667.68, stdev=130.89, samples=19 00:20:23.791 iops : min= 124, max= 230, avg=166.89, stdev=32.70, samples=19 00:20:23.791 lat (msec) : 50=1.91%, 100=46.69%, 250=51.40% 00:20:23.791 cpu : usr=40.62%, sys=2.41%, ctx=1159, majf=0, minf=9 00:20:23.791 IO depths : 1=0.1%, 2=3.1%, 4=12.3%, 8=70.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=90.3%, 8=6.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95281: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=182, BW=731KiB/s (749kB/s)(7316KiB/10003msec) 00:20:23.791 slat (usec): min=4, max=8026, avg=18.36, stdev=187.43 00:20:23.791 clat (msec): min=8, max=160, avg=87.42, stdev=23.43 00:20:23.791 lat (msec): min=8, max=160, avg=87.44, stdev=23.43 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 27], 5.00th=[ 49], 10.00th=[ 61], 20.00th=[ 72], 00:20:23.791 | 30.00th=[ 
72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 95], 00:20:23.791 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:20:23.791 | 99.00th=[ 130], 99.50th=[ 150], 99.90th=[ 161], 99.95th=[ 161], 00:20:23.791 | 99.99th=[ 161] 00:20:23.791 bw ( KiB/s): min= 640, max= 976, per=4.31%, avg=728.11, stdev=107.12, samples=19 00:20:23.791 iops : min= 160, max= 244, avg=182.00, stdev=26.78, samples=19 00:20:23.791 lat (msec) : 10=0.38%, 50=5.63%, 100=58.06%, 250=35.92% 00:20:23.791 cpu : usr=31.20%, sys=1.79%, ctx=896, majf=0, minf=9 00:20:23.791 IO depths : 1=0.1%, 2=0.3%, 4=1.4%, 8=82.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=87.0%, 8=12.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1829,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95282: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=176, BW=707KiB/s (724kB/s)(7104KiB/10051msec) 00:20:23.791 slat (usec): min=3, max=10029, avg=39.26, stdev=468.42 00:20:23.791 clat (msec): min=9, max=148, avg=90.37, stdev=25.62 00:20:23.791 lat (msec): min=9, max=148, avg=90.41, stdev=25.63 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 12], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 70], 00:20:23.791 | 30.00th=[ 72], 40.00th=[ 82], 50.00th=[ 95], 60.00th=[ 106], 00:20:23.791 | 70.00th=[ 110], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 123], 00:20:23.791 | 99.00th=[ 129], 99.50th=[ 138], 99.90th=[ 146], 99.95th=[ 148], 00:20:23.791 | 99.99th=[ 148] 00:20:23.791 bw ( KiB/s): min= 584, max= 1264, per=4.16%, avg=704.00, stdev=167.91, samples=20 00:20:23.791 iops : min= 146, max= 316, avg=176.00, stdev=41.98, samples=20 00:20:23.791 lat (msec) : 10=0.79%, 20=0.90%, 50=4.84%, 100=46.90%, 250=46.57% 00:20:23.791 cpu : usr=31.23%, sys=1.87%, ctx=1105, majf=0, minf=9 00:20:23.791 IO depths : 1=0.1%, 2=0.3%, 4=0.8%, 8=81.9%, 16=16.8%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=88.0%, 8=11.8%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95283: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=184, BW=737KiB/s (755kB/s)(7380KiB/10009msec) 00:20:23.791 slat (usec): min=4, max=8024, avg=22.01, stdev=208.54 00:20:23.791 clat (msec): min=26, max=127, avg=86.70, stdev=22.99 00:20:23.791 lat (msec): min=26, max=127, avg=86.72, stdev=23.00 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 71], 00:20:23.791 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 82], 60.00th=[ 90], 00:20:23.791 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:20:23.791 | 99.00th=[ 125], 99.50th=[ 126], 99.90th=[ 128], 99.95th=[ 128], 00:20:23.791 | 99.99th=[ 128] 00:20:23.791 bw ( KiB/s): min= 664, max= 944, per=4.36%, avg=737.37, stdev=107.61, samples=19 00:20:23.791 iops : min= 166, max= 236, avg=184.32, stdev=26.85, samples=19 00:20:23.791 lat (msec) : 50=6.12%, 100=57.89%, 250=35.99% 00:20:23.791 cpu : usr=39.64%, sys=2.32%, ctx=992, majf=0, minf=9 00:20:23.791 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:23.791 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.791 issued rwts: total=1845,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.791 filename1: (groupid=0, jobs=1): err= 0: pid=95284: Tue Apr 23 03:05:01 2024 00:20:23.791 read: IOPS=179, BW=718KiB/s (735kB/s)(7216KiB/10052msec) 00:20:23.791 slat (usec): min=4, max=4020, avg=15.98, stdev=94.49 00:20:23.791 clat (msec): min=3, max=148, avg=89.06, stdev=26.38 00:20:23.791 lat (msec): min=3, max=148, avg=89.08, stdev=26.38 00:20:23.791 clat percentiles (msec): 00:20:23.791 | 1.00th=[ 6], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 71], 00:20:23.791 | 30.00th=[ 73], 40.00th=[ 81], 50.00th=[ 89], 60.00th=[ 105], 00:20:23.791 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 121], 95.00th=[ 121], 00:20:23.791 | 99.00th=[ 125], 99.50th=[ 132], 99.90th=[ 148], 99.95th=[ 148], 00:20:23.791 | 99.99th=[ 148] 00:20:23.791 bw ( KiB/s): min= 592, max= 1392, per=4.23%, avg=715.20, stdev=182.82, samples=20 00:20:23.791 iops : min= 148, max= 348, avg=178.80, stdev=45.71, samples=20 00:20:23.791 lat (msec) : 4=0.11%, 10=2.55%, 50=3.71%, 100=49.17%, 250=44.46% 00:20:23.792 cpu : usr=38.09%, sys=1.90%, ctx=1119, majf=0, minf=0 00:20:23.792 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95285: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=177, BW=709KiB/s (726kB/s)(7108KiB/10030msec) 00:20:23.792 slat (usec): min=4, max=4027, avg=21.27, stdev=145.54 00:20:23.792 clat (msec): min=40, max=153, avg=90.17, stdev=21.38 00:20:23.792 lat (msec): min=40, max=153, avg=90.19, stdev=21.38 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 47], 5.00th=[ 59], 10.00th=[ 66], 20.00th=[ 72], 00:20:23.792 | 30.00th=[ 74], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 103], 00:20:23.792 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 121], 00:20:23.792 | 99.00th=[ 130], 99.50th=[ 131], 99.90th=[ 155], 99.95th=[ 155], 00:20:23.792 | 99.99th=[ 155] 00:20:23.792 bw ( KiB/s): min= 568, max= 920, per=4.17%, avg=704.40, stdev=93.20, samples=20 00:20:23.792 iops : min= 142, max= 230, avg=176.10, stdev=23.30, samples=20 00:20:23.792 lat (msec) : 50=2.19%, 100=57.51%, 250=40.29% 00:20:23.792 cpu : usr=43.39%, sys=2.47%, ctx=1259, majf=0, minf=9 00:20:23.792 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.5%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=88.4%, 8=10.3%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1777,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95286: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=182, BW=730KiB/s (748kB/s)(7340KiB/10050msec) 00:20:23.792 slat (usec): min=4, max=8029, avg=24.84, stdev=265.63 00:20:23.792 clat (msec): min=7, max=147, avg=87.51, stdev=25.13 00:20:23.792 lat (msec): min=7, max=147, avg=87.53, stdev=25.13 00:20:23.792 clat percentiles (msec): 
00:20:23.792 | 1.00th=[ 10], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 71], 00:20:23.792 | 30.00th=[ 72], 40.00th=[ 79], 50.00th=[ 85], 60.00th=[ 102], 00:20:23.792 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:20:23.792 | 99.00th=[ 126], 99.50th=[ 127], 99.90th=[ 144], 99.95th=[ 148], 00:20:23.792 | 99.99th=[ 148] 00:20:23.792 bw ( KiB/s): min= 616, max= 1218, per=4.30%, avg=727.70, stdev=153.02, samples=20 00:20:23.792 iops : min= 154, max= 304, avg=181.90, stdev=38.17, samples=20 00:20:23.792 lat (msec) : 10=1.74%, 50=4.90%, 100=53.30%, 250=40.05% 00:20:23.792 cpu : usr=41.72%, sys=2.45%, ctx=1203, majf=0, minf=9 00:20:23.792 IO depths : 1=0.2%, 2=0.6%, 4=1.9%, 8=81.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=87.7%, 8=11.9%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95287: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=147, BW=588KiB/s (602kB/s)(5888KiB/10013msec) 00:20:23.792 slat (usec): min=4, max=8028, avg=19.01, stdev=208.97 00:20:23.792 clat (msec): min=51, max=166, avg=108.62, stdev=22.49 00:20:23.792 lat (msec): min=51, max=166, avg=108.64, stdev=22.49 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 62], 5.00th=[ 68], 10.00th=[ 73], 20.00th=[ 87], 00:20:23.792 | 30.00th=[ 104], 40.00th=[ 110], 50.00th=[ 113], 60.00th=[ 115], 00:20:23.792 | 70.00th=[ 120], 80.00th=[ 121], 90.00th=[ 140], 95.00th=[ 146], 00:20:23.792 | 99.00th=[ 155], 99.50th=[ 159], 99.90th=[ 167], 99.95th=[ 167], 00:20:23.792 | 99.99th=[ 167] 00:20:23.792 bw ( KiB/s): min= 496, max= 878, per=3.47%, avg=587.90, stdev=110.07, samples=20 00:20:23.792 iops : min= 124, max= 219, avg=146.95, stdev=27.45, samples=20 00:20:23.792 lat (msec) : 100=27.85%, 250=72.15% 00:20:23.792 cpu : usr=42.13%, sys=2.34%, ctx=1403, majf=0, minf=9 00:20:23.792 IO depths : 1=0.1%, 2=6.2%, 4=24.9%, 8=56.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95288: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=179, BW=716KiB/s (733kB/s)(7188KiB/10037msec) 00:20:23.792 slat (usec): min=3, max=4026, avg=15.83, stdev=94.79 00:20:23.792 clat (msec): min=32, max=152, avg=89.29, stdev=22.82 00:20:23.792 lat (msec): min=32, max=152, avg=89.30, stdev=22.83 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 38], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 71], 00:20:23.792 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 85], 60.00th=[ 101], 00:20:23.792 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 118], 95.00th=[ 122], 00:20:23.792 | 99.00th=[ 127], 99.50th=[ 129], 99.90th=[ 150], 99.95th=[ 153], 00:20:23.792 | 99.99th=[ 153] 00:20:23.792 bw ( KiB/s): min= 608, max= 1010, per=4.21%, avg=712.50, stdev=115.81, samples=20 00:20:23.792 iops : min= 152, max= 252, avg=178.10, stdev=28.88, samples=20 00:20:23.792 lat (msec) : 50=4.51%, 100=55.43%, 250=40.07% 00:20:23.792 cpu : usr=31.27%, sys=1.81%, ctx=1095, majf=0, minf=9 00:20:23.792 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 
8=81.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=87.8%, 8=11.7%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1797,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95289: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=179, BW=718KiB/s (736kB/s)(7208KiB/10034msec) 00:20:23.792 slat (usec): min=4, max=8024, avg=18.54, stdev=188.76 00:20:23.792 clat (msec): min=32, max=143, avg=88.99, stdev=22.82 00:20:23.792 lat (msec): min=32, max=143, avg=89.00, stdev=22.82 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 37], 5.00th=[ 50], 10.00th=[ 62], 20.00th=[ 71], 00:20:23.792 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 101], 00:20:23.792 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 121], 00:20:23.792 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 144], 99.95th=[ 144], 00:20:23.792 | 99.99th=[ 144] 00:20:23.792 bw ( KiB/s): min= 592, max= 968, per=4.23%, avg=714.40, stdev=106.48, samples=20 00:20:23.792 iops : min= 148, max= 242, avg=178.60, stdev=26.62, samples=20 00:20:23.792 lat (msec) : 50=5.16%, 100=55.11%, 250=39.73% 00:20:23.792 cpu : usr=36.62%, sys=2.03%, ctx=1116, majf=0, minf=9 00:20:23.792 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=82.9%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=87.3%, 8=12.5%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1802,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95290: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=166, BW=666KiB/s (682kB/s)(6688KiB/10036msec) 00:20:23.792 slat (usec): min=4, max=4035, avg=24.57, stdev=197.74 00:20:23.792 clat (msec): min=37, max=156, avg=95.85, stdev=21.95 00:20:23.792 lat (msec): min=37, max=156, avg=95.88, stdev=21.94 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 51], 5.00th=[ 62], 10.00th=[ 67], 20.00th=[ 72], 00:20:23.792 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 106], 60.00th=[ 109], 00:20:23.792 | 70.00th=[ 112], 80.00th=[ 116], 90.00th=[ 120], 95.00th=[ 122], 00:20:23.792 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 157], 99.95th=[ 157], 00:20:23.792 | 99.99th=[ 157] 00:20:23.792 bw ( KiB/s): min= 512, max= 1010, per=3.92%, avg=662.50, stdev=123.87, samples=20 00:20:23.792 iops : min= 128, max= 252, avg=165.60, stdev=30.89, samples=20 00:20:23.792 lat (msec) : 50=0.84%, 100=44.74%, 250=54.43% 00:20:23.792 cpu : usr=41.77%, sys=2.60%, ctx=1383, majf=0, minf=9 00:20:23.792 IO depths : 1=0.1%, 2=3.9%, 4=15.5%, 8=66.8%, 16=13.8%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=91.4%, 8=5.1%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95291: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=181, BW=728KiB/s (745kB/s)(7304KiB/10034msec) 00:20:23.792 slat (usec): min=4, max=8022, avg=24.70, stdev=267.50 00:20:23.792 clat (msec): min=23, max=143, avg=87.81, stdev=22.94 00:20:23.792 lat (msec): min=23, max=143, 
avg=87.83, stdev=22.94 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 36], 5.00th=[ 51], 10.00th=[ 61], 20.00th=[ 71], 00:20:23.792 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 84], 60.00th=[ 97], 00:20:23.792 | 70.00th=[ 108], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 121], 00:20:23.792 | 99.00th=[ 125], 99.50th=[ 128], 99.90th=[ 130], 99.95th=[ 144], 00:20:23.792 | 99.99th=[ 144] 00:20:23.792 bw ( KiB/s): min= 616, max= 1032, per=4.28%, avg=724.00, stdev=115.30, samples=20 00:20:23.792 iops : min= 154, max= 258, avg=181.00, stdev=28.83, samples=20 00:20:23.792 lat (msec) : 50=5.09%, 100=56.30%, 250=38.61% 00:20:23.792 cpu : usr=35.01%, sys=1.96%, ctx=1089, majf=0, minf=9 00:20:23.792 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:23.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.792 issued rwts: total=1826,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.792 filename2: (groupid=0, jobs=1): err= 0: pid=95292: Tue Apr 23 03:05:01 2024 00:20:23.792 read: IOPS=175, BW=704KiB/s (720kB/s)(7048KiB/10017msec) 00:20:23.792 slat (usec): min=4, max=8038, avg=33.59, stdev=301.82 00:20:23.792 clat (msec): min=42, max=151, avg=90.75, stdev=22.17 00:20:23.792 lat (msec): min=42, max=151, avg=90.79, stdev=22.16 00:20:23.792 clat percentiles (msec): 00:20:23.792 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 72], 00:20:23.792 | 30.00th=[ 73], 40.00th=[ 80], 50.00th=[ 84], 60.00th=[ 105], 00:20:23.792 | 70.00th=[ 110], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 122], 00:20:23.792 | 99.00th=[ 140], 99.50th=[ 153], 99.90th=[ 153], 99.95th=[ 153], 00:20:23.793 | 99.99th=[ 153] 00:20:23.793 bw ( KiB/s): min= 512, max= 944, per=4.13%, avg=698.40, stdev=97.98, samples=20 00:20:23.793 iops : min= 128, max= 236, avg=174.60, stdev=24.50, samples=20 00:20:23.793 lat (msec) : 50=2.55%, 100=55.85%, 250=41.60% 00:20:23.793 cpu : usr=42.34%, sys=2.51%, ctx=1296, majf=0, minf=9 00:20:23.793 IO depths : 1=0.1%, 2=1.5%, 4=5.8%, 8=77.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:23.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.793 complete : 0=0.0%, 4=88.5%, 8=10.3%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:23.793 issued rwts: total=1762,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:23.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:23.793 00:20:23.793 Run status group 0 (all jobs): 00:20:23.793 READ: bw=16.5MiB/s (17.3MB/s), 588KiB/s-739KiB/s (602kB/s-757kB/s), io=166MiB (174MB), run=10003-10070msec 00:20:23.793 03:05:01 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:23.793 03:05:01 -- target/dif.sh@43 -- # local sub 00:20:23.793 03:05:01 -- target/dif.sh@45 -- # for sub in "$@" 00:20:23.793 03:05:01 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:23.793 03:05:01 -- target/dif.sh@36 -- # local sub_id=0 00:20:23.793 03:05:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- 
common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@45 -- # for sub in "$@" 00:20:23.793 03:05:01 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:23.793 03:05:01 -- target/dif.sh@36 -- # local sub_id=1 00:20:23.793 03:05:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@45 -- # for sub in "$@" 00:20:23.793 03:05:01 -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:23.793 03:05:01 -- target/dif.sh@36 -- # local sub_id=2 00:20:23.793 03:05:01 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@115 -- # NULL_DIF=1 00:20:23.793 03:05:01 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:23.793 03:05:01 -- target/dif.sh@115 -- # numjobs=2 00:20:23.793 03:05:01 -- target/dif.sh@115 -- # iodepth=8 00:20:23.793 03:05:01 -- target/dif.sh@115 -- # runtime=5 00:20:23.793 03:05:01 -- target/dif.sh@115 -- # files=1 00:20:23.793 03:05:01 -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:23.793 03:05:01 -- target/dif.sh@28 -- # local sub 00:20:23.793 03:05:01 -- target/dif.sh@30 -- # for sub in "$@" 00:20:23.793 03:05:01 -- target/dif.sh@31 -- # create_subsystem 0 00:20:23.793 03:05:01 -- target/dif.sh@18 -- # local sub_id=0 00:20:23.793 03:05:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 bdev_null0 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 
-t tcp -a 10.0.0.2 -s 4420 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 [2024-04-23 03:05:01.543164] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@30 -- # for sub in "$@" 00:20:23.793 03:05:01 -- target/dif.sh@31 -- # create_subsystem 1 00:20:23.793 03:05:01 -- target/dif.sh@18 -- # local sub_id=1 00:20:23.793 03:05:01 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 bdev_null1 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:23.793 03:05:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:23.793 03:05:01 -- common/autotest_common.sh@10 -- # set +x 00:20:23.793 03:05:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:23.793 03:05:01 -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:23.793 03:05:01 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:23.793 03:05:01 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:23.793 03:05:01 -- nvmf/common.sh@521 -- # config=() 00:20:23.793 03:05:01 -- nvmf/common.sh@521 -- # local subsystem config 00:20:23.793 03:05:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.793 03:05:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.793 { 00:20:23.793 "params": { 00:20:23.793 "name": "Nvme$subsystem", 00:20:23.793 "trtype": "$TEST_TRANSPORT", 00:20:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.793 "adrfam": "ipv4", 00:20:23.793 "trsvcid": "$NVMF_PORT", 00:20:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.793 "hdgst": ${hdgst:-false}, 00:20:23.793 "ddgst": ${ddgst:-false} 00:20:23.793 }, 00:20:23.793 "method": "bdev_nvme_attach_controller" 00:20:23.793 } 00:20:23.793 EOF 00:20:23.793 )") 00:20:23.793 03:05:01 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:23.793 03:05:01 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:23.793 03:05:01 -- target/dif.sh@82 -- # gen_fio_conf 00:20:23.793 03:05:01 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:23.793 03:05:01 -- target/dif.sh@54 -- # local file 00:20:23.793 03:05:01 -- 
common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:23.793 03:05:01 -- nvmf/common.sh@543 -- # cat 00:20:23.793 03:05:01 -- target/dif.sh@56 -- # cat 00:20:23.793 03:05:01 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:23.793 03:05:01 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:23.793 03:05:01 -- common/autotest_common.sh@1327 -- # shift 00:20:23.793 03:05:01 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:23.793 03:05:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:23.793 03:05:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:23.793 03:05:01 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:23.793 03:05:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:23.793 03:05:01 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:23.793 03:05:01 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:23.793 03:05:01 -- target/dif.sh@72 -- # (( file <= files )) 00:20:23.793 03:05:01 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:23.793 { 00:20:23.793 "params": { 00:20:23.793 "name": "Nvme$subsystem", 00:20:23.793 "trtype": "$TEST_TRANSPORT", 00:20:23.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:23.793 "adrfam": "ipv4", 00:20:23.793 "trsvcid": "$NVMF_PORT", 00:20:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:23.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:23.793 "hdgst": ${hdgst:-false}, 00:20:23.793 "ddgst": ${ddgst:-false} 00:20:23.793 }, 00:20:23.793 "method": "bdev_nvme_attach_controller" 00:20:23.793 } 00:20:23.793 EOF 00:20:23.793 )") 00:20:23.793 03:05:01 -- target/dif.sh@73 -- # cat 00:20:23.793 03:05:01 -- nvmf/common.sh@543 -- # cat 00:20:23.793 03:05:01 -- target/dif.sh@72 -- # (( file++ )) 00:20:23.793 03:05:01 -- target/dif.sh@72 -- # (( file <= files )) 00:20:23.793 03:05:01 -- nvmf/common.sh@545 -- # jq . 
00:20:23.793 03:05:01 -- nvmf/common.sh@546 -- # IFS=, 00:20:23.793 03:05:01 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:23.793 "params": { 00:20:23.793 "name": "Nvme0", 00:20:23.793 "trtype": "tcp", 00:20:23.793 "traddr": "10.0.0.2", 00:20:23.793 "adrfam": "ipv4", 00:20:23.793 "trsvcid": "4420", 00:20:23.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:23.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:23.793 "hdgst": false, 00:20:23.793 "ddgst": false 00:20:23.793 }, 00:20:23.793 "method": "bdev_nvme_attach_controller" 00:20:23.794 },{ 00:20:23.794 "params": { 00:20:23.794 "name": "Nvme1", 00:20:23.794 "trtype": "tcp", 00:20:23.794 "traddr": "10.0.0.2", 00:20:23.794 "adrfam": "ipv4", 00:20:23.794 "trsvcid": "4420", 00:20:23.794 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:23.794 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:23.794 "hdgst": false, 00:20:23.794 "ddgst": false 00:20:23.794 }, 00:20:23.794 "method": "bdev_nvme_attach_controller" 00:20:23.794 }' 00:20:23.794 03:05:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:23.794 03:05:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:23.794 03:05:01 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:23.794 03:05:01 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:23.794 03:05:01 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:23.794 03:05:01 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:23.794 03:05:01 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:23.794 03:05:01 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:23.794 03:05:01 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:23.794 03:05:01 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:23.794 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:23.794 ... 00:20:23.794 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:23.794 ... 
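The (( file = 1 )) ... (( file <= files )) loop traced above is gen_fio_conf writing a global section plus one [filenameN] job per target, each pointing at namespace 1 of the controller attached as NvmeN. A simplified sketch; any global keys beyond those echoed back in fio's job banner (rw, bs, iodepth) are assumptions:

gen_fio_conf() {
    local file
    printf '[global]\nbs=%s\nnumjobs=%s\niodepth=%s\nrw=randread\n' \
        "$bs" "$numjobs" "$iodepth"
    # runtime was empty in the first run and 5 in this one
    [[ -n $runtime ]] && printf 'time_based=1\nruntime=%s\n' "$runtime"
    for ((file = 0; file <= files; file++)); do
        printf '[filename%d]\nfilename=Nvme%dn1\n' "$file" "$file"
    done
}

The job count is files + 1, so files=2 with numjobs=8 gave the 24 threads earlier, and files=1 with numjobs=2 gives the 4 threads started just below.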
00:20:23.794 fio-3.35 00:20:23.794 Starting 4 threads 00:20:29.067 00:20:29.067 filename0: (groupid=0, jobs=1): err= 0: pid=95445: Tue Apr 23 03:05:07 2024 00:20:29.067 read: IOPS=2140, BW=16.7MiB/s (17.5MB/s)(83.6MiB/5002msec) 00:20:29.067 slat (nsec): min=6706, max=46042, avg=12339.36, stdev=4455.22 00:20:29.067 clat (usec): min=633, max=7322, avg=3697.86, stdev=954.09 00:20:29.067 lat (usec): min=642, max=7337, avg=3710.20, stdev=954.69 00:20:29.067 clat percentiles (usec): 00:20:29.067 | 1.00th=[ 1434], 5.00th=[ 1516], 10.00th=[ 1631], 20.00th=[ 3163], 00:20:29.067 | 30.00th=[ 3556], 40.00th=[ 3720], 50.00th=[ 4015], 60.00th=[ 4178], 00:20:29.067 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 4752], 00:20:29.067 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5473], 99.95th=[ 5604], 00:20:29.067 | 99.99th=[ 7308] 00:20:29.068 bw ( KiB/s): min=14208, max=19264, per=26.84%, avg=17153.56, stdev=1888.68, samples=9 00:20:29.068 iops : min= 1776, max= 2408, avg=2144.11, stdev=236.12, samples=9 00:20:29.068 lat (usec) : 750=0.13%, 1000=0.16% 00:20:29.068 lat (msec) : 2=10.88%, 4=38.54%, 10=50.28% 00:20:29.068 cpu : usr=92.06%, sys=7.00%, ctx=14, majf=0, minf=0 00:20:29.068 IO depths : 1=0.1%, 2=11.3%, 4=58.2%, 8=30.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 issued rwts: total=10707,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.068 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:29.068 filename0: (groupid=0, jobs=1): err= 0: pid=95446: Tue Apr 23 03:05:07 2024 00:20:29.068 read: IOPS=1943, BW=15.2MiB/s (15.9MB/s)(75.9MiB/5002msec) 00:20:29.068 slat (nsec): min=3713, max=55810, avg=15022.28, stdev=4658.24 00:20:29.068 clat (usec): min=1331, max=7341, avg=4064.13, stdev=815.34 00:20:29.068 lat (usec): min=1345, max=7360, avg=4079.15, stdev=815.49 00:20:29.068 clat percentiles (usec): 00:20:29.068 | 1.00th=[ 2024], 5.00th=[ 2278], 10.00th=[ 2737], 20.00th=[ 3654], 00:20:29.068 | 30.00th=[ 3785], 40.00th=[ 4047], 50.00th=[ 4228], 60.00th=[ 4359], 00:20:29.068 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4752], 95.00th=[ 5211], 00:20:29.068 | 99.00th=[ 6325], 99.50th=[ 6456], 99.90th=[ 6587], 99.95th=[ 6652], 00:20:29.068 | 99.99th=[ 7373] 00:20:29.068 bw ( KiB/s): min=12032, max=19472, per=24.08%, avg=15392.44, stdev=2299.88, samples=9 00:20:29.068 iops : min= 1504, max= 2434, avg=1924.00, stdev=287.41, samples=9 00:20:29.068 lat (msec) : 2=0.87%, 4=35.48%, 10=63.65% 00:20:29.068 cpu : usr=91.78%, sys=7.38%, ctx=57, majf=0, minf=0 00:20:29.068 IO depths : 1=0.1%, 2=18.4%, 4=54.0%, 8=27.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 issued rwts: total=9719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.068 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:29.068 filename1: (groupid=0, jobs=1): err= 0: pid=95447: Tue Apr 23 03:05:07 2024 00:20:29.068 read: IOPS=1880, BW=14.7MiB/s (15.4MB/s)(73.5MiB/5001msec) 00:20:29.068 slat (nsec): min=6572, max=59199, avg=15367.54, stdev=4797.01 00:20:29.068 clat (usec): min=975, max=7944, avg=4197.33, stdev=603.76 00:20:29.068 lat (usec): min=984, max=7972, avg=4212.70, stdev=603.72 00:20:29.068 clat percentiles (usec): 00:20:29.068 | 1.00th=[ 2147], 5.00th=[ 3163], 10.00th=[ 3589], 
20.00th=[ 3720], 00:20:29.068 | 30.00th=[ 4015], 40.00th=[ 4178], 50.00th=[ 4293], 60.00th=[ 4359], 00:20:29.068 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5014], 00:20:29.068 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 6456], 99.95th=[ 6456], 00:20:29.068 | 99.99th=[ 7963] 00:20:29.068 bw ( KiB/s): min=13952, max=16447, per=23.21%, avg=14833.67, stdev=933.61, samples=9 00:20:29.068 iops : min= 1744, max= 2055, avg=1854.11, stdev=116.51, samples=9 00:20:29.068 lat (usec) : 1000=0.02% 00:20:29.068 lat (msec) : 2=0.90%, 4=28.59%, 10=70.49% 00:20:29.068 cpu : usr=91.18%, sys=7.98%, ctx=8, majf=0, minf=9 00:20:29.068 IO depths : 1=0.1%, 2=21.5%, 4=52.4%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 issued rwts: total=9402,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.068 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:29.068 filename1: (groupid=0, jobs=1): err= 0: pid=95448: Tue Apr 23 03:05:07 2024 00:20:29.068 read: IOPS=2025, BW=15.8MiB/s (16.6MB/s)(79.1MiB/5002msec) 00:20:29.068 slat (nsec): min=6875, max=64963, avg=14917.84, stdev=4874.42 00:20:29.068 clat (usec): min=995, max=7691, avg=3898.98, stdev=789.75 00:20:29.068 lat (usec): min=1004, max=7705, avg=3913.90, stdev=790.20 00:20:29.068 clat percentiles (usec): 00:20:29.068 | 1.00th=[ 1844], 5.00th=[ 2212], 10.00th=[ 2507], 20.00th=[ 3458], 00:20:29.068 | 30.00th=[ 3687], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4293], 00:20:29.068 | 70.00th=[ 4359], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:20:29.068 | 99.00th=[ 5145], 99.50th=[ 5276], 99.90th=[ 5800], 99.95th=[ 5866], 00:20:29.068 | 99.99th=[ 7439] 00:20:29.068 bw ( KiB/s): min=13824, max=19472, per=25.23%, avg=16121.33, stdev=1944.58, samples=9 00:20:29.068 iops : min= 1728, max= 2434, avg=2015.11, stdev=243.01, samples=9 00:20:29.068 lat (usec) : 1000=0.01% 00:20:29.068 lat (msec) : 2=1.54%, 4=40.87%, 10=57.58% 00:20:29.068 cpu : usr=91.80%, sys=7.34%, ctx=6, majf=0, minf=9 00:20:29.068 IO depths : 1=0.1%, 2=15.5%, 4=55.9%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:29.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 complete : 0=0.0%, 4=94.0%, 8=6.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:29.068 issued rwts: total=10130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:29.068 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:29.068 00:20:29.068 Run status group 0 (all jobs): 00:20:29.068 READ: bw=62.4MiB/s (65.4MB/s), 14.7MiB/s-16.7MiB/s (15.4MB/s-17.5MB/s), io=312MiB (327MB), run=5001-5002msec 00:20:29.068 03:05:07 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:29.068 03:05:07 -- target/dif.sh@43 -- # local sub 00:20:29.068 03:05:07 -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.068 03:05:07 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:29.068 03:05:07 -- target/dif.sh@36 -- # local sub_id=0 00:20:29.068 03:05:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- 
common/autotest_common.sh@10 -- # set +x 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@45 -- # for sub in "$@" 00:20:29.068 03:05:07 -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:29.068 03:05:07 -- target/dif.sh@36 -- # local sub_id=1 00:20:29.068 03:05:07 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 ************************************ 00:20:29.068 END TEST fio_dif_rand_params 00:20:29.068 ************************************ 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 00:20:29.068 real 0m23.137s 00:20:29.068 user 2m3.984s 00:20:29.068 sys 0m8.524s 00:20:29.068 03:05:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 03:05:07 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:29.068 03:05:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:29.068 03:05:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 ************************************ 00:20:29.068 START TEST fio_dif_digest 00:20:29.068 ************************************ 00:20:29.068 03:05:07 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:20:29.068 03:05:07 -- target/dif.sh@123 -- # local NULL_DIF 00:20:29.068 03:05:07 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:29.068 03:05:07 -- target/dif.sh@125 -- # local hdgst ddgst 00:20:29.068 03:05:07 -- target/dif.sh@127 -- # NULL_DIF=3 00:20:29.068 03:05:07 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:29.068 03:05:07 -- target/dif.sh@127 -- # numjobs=3 00:20:29.068 03:05:07 -- target/dif.sh@127 -- # iodepth=3 00:20:29.068 03:05:07 -- target/dif.sh@127 -- # runtime=10 00:20:29.068 03:05:07 -- target/dif.sh@128 -- # hdgst=true 00:20:29.068 03:05:07 -- target/dif.sh@128 -- # ddgst=true 00:20:29.068 03:05:07 -- target/dif.sh@130 -- # create_subsystems 0 00:20:29.068 03:05:07 -- target/dif.sh@28 -- # local sub 00:20:29.068 03:05:07 -- target/dif.sh@30 -- # for sub in "$@" 00:20:29.068 03:05:07 -- target/dif.sh@31 -- # create_subsystem 0 00:20:29.068 03:05:07 -- target/dif.sh@18 -- # local sub_id=0 00:20:29.068 03:05:07 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 bdev_null0 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.068 03:05:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.068 03:05:07 -- common/autotest_common.sh@10 -- # set +x 00:20:29.068 [2024-04-23 03:05:07.664228] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.068 03:05:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.068 03:05:07 -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:29.068 03:05:07 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:29.068 03:05:07 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:29.068 03:05:07 -- nvmf/common.sh@521 -- # config=() 00:20:29.068 03:05:07 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.068 03:05:07 -- nvmf/common.sh@521 -- # local subsystem config 00:20:29.068 03:05:07 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.068 03:05:07 -- target/dif.sh@82 -- # gen_fio_conf 00:20:29.069 03:05:07 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:20:29.069 03:05:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:20:29.069 03:05:07 -- target/dif.sh@54 -- # local file 00:20:29.069 03:05:07 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:29.069 03:05:07 -- target/dif.sh@56 -- # cat 00:20:29.069 03:05:07 -- common/autotest_common.sh@1325 -- # local sanitizers 00:20:29.069 03:05:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:20:29.069 { 00:20:29.069 "params": { 00:20:29.069 "name": "Nvme$subsystem", 00:20:29.069 "trtype": "$TEST_TRANSPORT", 00:20:29.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:29.069 "adrfam": "ipv4", 00:20:29.069 "trsvcid": "$NVMF_PORT", 00:20:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:29.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:29.069 "hdgst": ${hdgst:-false}, 00:20:29.069 "ddgst": ${ddgst:-false} 00:20:29.069 }, 00:20:29.069 "method": "bdev_nvme_attach_controller" 00:20:29.069 } 00:20:29.069 EOF 00:20:29.069 )") 00:20:29.069 03:05:07 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.069 03:05:07 -- common/autotest_common.sh@1327 -- # shift 00:20:29.069 03:05:07 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:20:29.069 03:05:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.069 03:05:07 -- nvmf/common.sh@543 -- # cat 00:20:29.069 03:05:07 -- target/dif.sh@72 -- # (( file = 1 )) 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # grep libasan 00:20:29.069 03:05:07 -- target/dif.sh@72 -- # (( file <= files )) 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:29.069 03:05:07 -- nvmf/common.sh@545 -- # jq . 
00:20:29.069 03:05:07 -- nvmf/common.sh@546 -- # IFS=, 00:20:29.069 03:05:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:20:29.069 "params": { 00:20:29.069 "name": "Nvme0", 00:20:29.069 "trtype": "tcp", 00:20:29.069 "traddr": "10.0.0.2", 00:20:29.069 "adrfam": "ipv4", 00:20:29.069 "trsvcid": "4420", 00:20:29.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:29.069 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:29.069 "hdgst": true, 00:20:29.069 "ddgst": true 00:20:29.069 }, 00:20:29.069 "method": "bdev_nvme_attach_controller" 00:20:29.069 }' 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:29.069 03:05:07 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:29.069 03:05:07 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:20:29.069 03:05:07 -- common/autotest_common.sh@1331 -- # asan_lib= 00:20:29.069 03:05:07 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:20:29.069 03:05:07 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:29.069 03:05:07 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:29.069 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:29.069 ... 00:20:29.069 fio-3.35 00:20:29.069 Starting 3 threads 00:20:41.277 00:20:41.277 filename0: (groupid=0, jobs=1): err= 0: pid=95560: Tue Apr 23 03:05:18 2024 00:20:41.277 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(280MiB/10002msec) 00:20:41.277 slat (nsec): min=7101, max=53880, avg=16764.28, stdev=5653.49 00:20:41.277 clat (usec): min=12009, max=15076, avg=13351.42, stdev=515.07 00:20:41.277 lat (usec): min=12019, max=15093, avg=13368.18, stdev=515.53 00:20:41.277 clat percentiles (usec): 00:20:41.277 | 1.00th=[12256], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:20:41.277 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:20:41.277 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:20:41.277 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:20:41.277 | 99.99th=[15139] 00:20:41.277 bw ( KiB/s): min=27648, max=29952, per=33.33%, avg=28661.58, stdev=682.47, samples=19 00:20:41.277 iops : min= 216, max= 234, avg=223.89, stdev= 5.31, samples=19 00:20:41.277 lat (msec) : 20=100.00% 00:20:41.277 cpu : usr=91.38%, sys=8.07%, ctx=13, majf=0, minf=9 00:20:41.277 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.277 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:41.277 filename0: (groupid=0, jobs=1): err= 0: pid=95561: Tue Apr 23 03:05:18 2024 00:20:41.277 read: IOPS=224, BW=28.0MiB/s (29.4MB/s)(280MiB/10003msec) 00:20:41.277 slat (nsec): min=6966, max=58896, avg=16535.95, stdev=5478.40 00:20:41.277 clat (usec): min=12013, max=15083, avg=13353.91, stdev=517.38 00:20:41.277 lat (usec): min=12021, max=15101, avg=13370.44, stdev=518.06 00:20:41.277 clat 
percentiles (usec): 00:20:41.277 | 1.00th=[12256], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:20:41.277 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:20:41.277 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:20:41.277 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15008], 99.95th=[15008], 00:20:41.277 | 99.99th=[15139] 00:20:41.277 bw ( KiB/s): min=27648, max=29952, per=33.32%, avg=28658.53, stdev=679.85, samples=19 00:20:41.277 iops : min= 216, max= 234, avg=223.89, stdev= 5.31, samples=19 00:20:41.277 lat (msec) : 20=100.00% 00:20:41.277 cpu : usr=91.97%, sys=7.44%, ctx=31, majf=0, minf=9 00:20:41.277 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.277 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.277 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:41.277 filename0: (groupid=0, jobs=1): err= 0: pid=95562: Tue Apr 23 03:05:18 2024 00:20:41.277 read: IOPS=223, BW=28.0MiB/s (29.4MB/s)(280MiB/10006msec) 00:20:41.277 slat (nsec): min=3196, max=56807, avg=16218.88, stdev=6445.11 00:20:41.277 clat (usec): min=11952, max=17398, avg=13358.15, stdev=536.58 00:20:41.277 lat (usec): min=11965, max=17418, avg=13374.37, stdev=536.89 00:20:41.277 clat percentiles (usec): 00:20:41.277 | 1.00th=[12256], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:20:41.277 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:20:41.277 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13960], 95.00th=[14091], 00:20:41.277 | 99.00th=[14353], 99.50th=[14484], 99.90th=[17433], 99.95th=[17433], 00:20:41.277 | 99.99th=[17433] 00:20:41.277 bw ( KiB/s): min=27648, max=29184, per=33.32%, avg=28655.42, stdev=677.45, samples=19 00:20:41.277 iops : min= 216, max= 228, avg=223.84, stdev= 5.27, samples=19 00:20:41.277 lat (msec) : 20=100.00% 00:20:41.278 cpu : usr=91.23%, sys=8.18%, ctx=10, majf=0, minf=0 00:20:41.278 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:41.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:41.278 issued rwts: total=2241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:41.278 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:41.278 00:20:41.278 Run status group 0 (all jobs): 00:20:41.278 READ: bw=84.0MiB/s (88.1MB/s), 28.0MiB/s-28.0MiB/s (29.4MB/s-29.4MB/s), io=840MiB (881MB), run=10002-10006msec 00:20:41.278 03:05:18 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:41.278 03:05:18 -- target/dif.sh@43 -- # local sub 00:20:41.278 03:05:18 -- target/dif.sh@45 -- # for sub in "$@" 00:20:41.278 03:05:18 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:41.278 03:05:18 -- target/dif.sh@36 -- # local sub_id=0 00:20:41.278 03:05:18 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:41.278 03:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.278 03:05:18 -- common/autotest_common.sh@10 -- # set +x 00:20:41.278 03:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.278 03:05:18 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:41.278 03:05:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:41.278 03:05:18 -- common/autotest_common.sh@10 
-- # set +x 00:20:41.278 ************************************ 00:20:41.278 END TEST fio_dif_digest 00:20:41.278 ************************************ 00:20:41.278 03:05:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:41.278 00:20:41.278 real 0m10.843s 00:20:41.278 user 0m28.016s 00:20:41.278 sys 0m2.590s 00:20:41.278 03:05:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.278 03:05:18 -- common/autotest_common.sh@10 -- # set +x 00:20:41.278 03:05:18 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:41.278 03:05:18 -- target/dif.sh@147 -- # nvmftestfini 00:20:41.278 03:05:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:41.278 03:05:18 -- nvmf/common.sh@117 -- # sync 00:20:41.278 03:05:18 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:41.278 03:05:18 -- nvmf/common.sh@120 -- # set +e 00:20:41.278 03:05:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:41.278 03:05:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:41.278 rmmod nvme_tcp 00:20:41.278 rmmod nvme_fabrics 00:20:41.278 rmmod nvme_keyring 00:20:41.278 03:05:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:41.278 03:05:18 -- nvmf/common.sh@124 -- # set -e 00:20:41.278 03:05:18 -- nvmf/common.sh@125 -- # return 0 00:20:41.278 03:05:18 -- nvmf/common.sh@478 -- # '[' -n 94787 ']' 00:20:41.278 03:05:18 -- nvmf/common.sh@479 -- # killprocess 94787 00:20:41.278 03:05:18 -- common/autotest_common.sh@936 -- # '[' -z 94787 ']' 00:20:41.278 03:05:18 -- common/autotest_common.sh@940 -- # kill -0 94787 00:20:41.278 03:05:18 -- common/autotest_common.sh@941 -- # uname 00:20:41.278 03:05:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:41.278 03:05:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 94787 00:20:41.278 03:05:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:41.278 03:05:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:41.278 03:05:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 94787' 00:20:41.278 killing process with pid 94787 00:20:41.278 03:05:18 -- common/autotest_common.sh@955 -- # kill 94787 00:20:41.278 03:05:18 -- common/autotest_common.sh@960 -- # wait 94787 00:20:41.278 03:05:18 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:20:41.278 03:05:18 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:41.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.278 Waiting for block devices as requested 00:20:41.278 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.278 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.278 03:05:19 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:41.278 03:05:19 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.278 03:05:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.278 03:05:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:41.278 03:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.278 03:05:19 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:41.278 ************************************ 00:20:41.278 END TEST nvmf_dif 00:20:41.278 ************************************ 00:20:41.278 00:20:41.278 real 0m58.402s 00:20:41.278 user 3m45.851s 00:20:41.278 sys 
0m19.563s 00:20:41.278 03:05:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.278 03:05:19 -- common/autotest_common.sh@10 -- # set +x 00:20:41.278 03:05:19 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:41.278 03:05:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:41.278 03:05:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.278 03:05:19 -- common/autotest_common.sh@10 -- # set +x 00:20:41.278 ************************************ 00:20:41.278 START TEST nvmf_abort_qd_sizes 00:20:41.278 ************************************ 00:20:41.278 03:05:19 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:41.278 * Looking for test storage... 00:20:41.278 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:41.278 03:05:19 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:41.278 03:05:19 -- nvmf/common.sh@7 -- # uname -s 00:20:41.278 03:05:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:41.278 03:05:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:41.278 03:05:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:41.278 03:05:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:41.278 03:05:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:41.278 03:05:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:41.278 03:05:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:41.278 03:05:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:41.278 03:05:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:41.278 03:05:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:20:41.278 03:05:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:20:41.278 03:05:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:41.278 03:05:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:41.278 03:05:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:41.278 03:05:19 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:41.278 03:05:19 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:41.278 03:05:19 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:41.278 03:05:19 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:41.278 03:05:19 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:41.278 03:05:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.278 03:05:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.278 03:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.278 03:05:19 -- paths/export.sh@5 -- # export PATH 00:20:41.278 03:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:41.278 03:05:19 -- nvmf/common.sh@47 -- # : 0 00:20:41.278 03:05:19 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:41.278 03:05:19 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:41.278 03:05:19 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:41.278 03:05:19 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:41.278 03:05:19 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:41.278 03:05:19 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:41.278 03:05:19 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:41.278 03:05:19 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:41.278 03:05:19 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:41.278 03:05:19 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:41.278 03:05:19 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:41.278 03:05:19 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:41.278 03:05:19 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:41.278 03:05:19 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:41.278 03:05:19 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.278 03:05:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:41.278 03:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.278 03:05:19 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:41.278 03:05:19 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:41.278 03:05:19 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.278 03:05:19 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.278 03:05:19 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:41.278 03:05:19 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:41.278 03:05:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:41.278 03:05:19 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
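The variables above pin down the veth and namespace topology that nvmf_veth_init assembles in the steps that follow: the initiator interface (nvmf_init_if, 10.0.0.1) stays in the root namespace, the target interfaces (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) move into nvmf_tgt_ns_spdk, and the peer ends meet on the nvmf_br bridge. A minimal root-shell sketch of the same layout, condensed from the commands traced below (single target arm only; the stale-device sweep at the start of the trace is omitted):

    # namespace plus two veth pairs; one end of each pair will sit on the bridge
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # initiator address in the root namespace, target address inside the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    # bring everything up and join the bridge-side peers to nvmf_br
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # allow NVMe/TCP traffic to the initiator interface, as in the trace
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow (10.0.0.2 and 10.0.0.3 from the root namespace, 10.0.0.1 from inside the target namespace) confirm the bridge passes traffic both ways before the target is started.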
00:20:41.278 03:05:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:41.279 03:05:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.279 03:05:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:41.279 03:05:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:41.279 03:05:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:41.279 03:05:19 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:41.279 03:05:19 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:41.279 03:05:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:41.279 Cannot find device "nvmf_tgt_br" 00:20:41.279 03:05:19 -- nvmf/common.sh@155 -- # true 00:20:41.279 03:05:19 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:41.279 Cannot find device "nvmf_tgt_br2" 00:20:41.279 03:05:19 -- nvmf/common.sh@156 -- # true 00:20:41.279 03:05:19 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:41.279 03:05:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:41.279 Cannot find device "nvmf_tgt_br" 00:20:41.279 03:05:19 -- nvmf/common.sh@158 -- # true 00:20:41.279 03:05:19 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:41.279 Cannot find device "nvmf_tgt_br2" 00:20:41.279 03:05:19 -- nvmf/common.sh@159 -- # true 00:20:41.279 03:05:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:41.279 03:05:19 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:41.279 03:05:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:41.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.279 03:05:19 -- nvmf/common.sh@162 -- # true 00:20:41.279 03:05:19 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:41.279 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:41.279 03:05:19 -- nvmf/common.sh@163 -- # true 00:20:41.279 03:05:19 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:41.279 03:05:19 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:41.279 03:05:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:41.279 03:05:19 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:41.279 03:05:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:41.279 03:05:19 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:41.279 03:05:19 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:41.279 03:05:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:41.279 03:05:19 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:41.279 03:05:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:41.279 03:05:19 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:41.279 03:05:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:41.279 03:05:19 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:41.279 03:05:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:41.279 03:05:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:41.279 03:05:19 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:41.279 03:05:19 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:41.279 03:05:19 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:41.279 03:05:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:41.279 03:05:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:41.279 03:05:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:41.279 03:05:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:41.279 03:05:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:41.279 03:05:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:41.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:41.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:20:41.279 00:20:41.279 --- 10.0.0.2 ping statistics --- 00:20:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.279 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:20:41.279 03:05:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:41.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:41.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:20:41.279 00:20:41.279 --- 10.0.0.3 ping statistics --- 00:20:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.279 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:41.279 03:05:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:41.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:41.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:20:41.279 00:20:41.279 --- 10.0.0.1 ping statistics --- 00:20:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:41.279 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:41.279 03:05:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:41.279 03:05:19 -- nvmf/common.sh@422 -- # return 0 00:20:41.279 03:05:19 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:20:41.279 03:05:19 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:41.536 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.794 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:41.794 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:41.794 03:05:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:41.794 03:05:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:41.794 03:05:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:41.794 03:05:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:41.794 03:05:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:41.794 03:05:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:41.794 03:05:20 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:41.794 03:05:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:41.794 03:05:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:41.794 03:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:41.794 03:05:20 -- nvmf/common.sh@470 -- # nvmfpid=96152 00:20:41.794 03:05:20 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:41.794 03:05:20 -- nvmf/common.sh@471 -- # waitforlisten 96152 00:20:41.794 03:05:20 -- 
common/autotest_common.sh@817 -- # '[' -z 96152 ']' 00:20:41.794 03:05:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:41.794 03:05:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:41.794 03:05:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:41.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:41.794 03:05:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:41.794 03:05:20 -- common/autotest_common.sh@10 -- # set +x 00:20:41.794 [2024-04-23 03:05:20.885787] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:20:41.794 [2024-04-23 03:05:20.885881] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.052 [2024-04-23 03:05:21.010027] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 00:20:42.052 [2024-04-23 03:05:21.029352] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.052 [2024-04-23 03:05:21.071940] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.052 [2024-04-23 03:05:21.072016] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.052 [2024-04-23 03:05:21.072036] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.052 [2024-04-23 03:05:21.072046] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.052 [2024-04-23 03:05:21.072055] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
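The startup lines above come from nvmfappstart: the target binary is run inside the test namespace with shared-memory id 0, every tracepoint group enabled, and a four-core mask, after which waitforlisten polls the RPC socket at /var/tmp/spdk.sock. The shape of the launch, reassembled from the trace (paths as in this run):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
    nvmfpid=$!
    # per the notice above, a snapshot of the enabled tracepoints can be taken
    # while it runs with: spdk_trace -s nvmf -i 0

With mask 0xf the app reports four available cores, and the four reactor threads start on cores 0 through 3 in the lines that follow.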
00:20:42.052 [2024-04-23 03:05:21.072211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.052 [2024-04-23 03:05:21.072975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.052 [2024-04-23 03:05:21.073352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.052 [2024-04-23 03:05:21.073362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.052 03:05:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:42.052 03:05:21 -- common/autotest_common.sh@850 -- # return 0 00:20:42.052 03:05:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:42.052 03:05:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:42.052 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.052 03:05:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.052 03:05:21 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:42.052 03:05:21 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:42.052 03:05:21 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:42.052 03:05:21 -- scripts/common.sh@309 -- # local bdf bdfs 00:20:42.052 03:05:21 -- scripts/common.sh@310 -- # local nvmes 00:20:42.052 03:05:21 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:20:42.052 03:05:21 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:42.052 03:05:21 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:20:42.052 03:05:21 -- scripts/common.sh@295 -- # local bdf= 00:20:42.052 03:05:21 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:20:42.052 03:05:21 -- scripts/common.sh@230 -- # local class 00:20:42.052 03:05:21 -- scripts/common.sh@231 -- # local subclass 00:20:42.052 03:05:21 -- scripts/common.sh@232 -- # local progif 00:20:42.052 03:05:21 -- scripts/common.sh@233 -- # printf %02x 1 00:20:42.052 03:05:21 -- scripts/common.sh@233 -- # class=01 00:20:42.052 03:05:21 -- scripts/common.sh@234 -- # printf %02x 8 00:20:42.052 03:05:21 -- scripts/common.sh@234 -- # subclass=08 00:20:42.052 03:05:21 -- scripts/common.sh@235 -- # printf %02x 2 00:20:42.052 03:05:21 -- scripts/common.sh@235 -- # progif=02 00:20:42.310 03:05:21 -- scripts/common.sh@237 -- # hash lspci 00:20:42.310 03:05:21 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:20:42.310 03:05:21 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:20:42.310 03:05:21 -- scripts/common.sh@240 -- # grep -i -- -p02 00:20:42.310 03:05:21 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:42.310 03:05:21 -- scripts/common.sh@242 -- # tr -d '"' 00:20:42.310 03:05:21 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:42.310 03:05:21 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:20:42.310 03:05:21 -- scripts/common.sh@15 -- # local i 00:20:42.310 03:05:21 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:20:42.310 03:05:21 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:42.310 03:05:21 -- scripts/common.sh@24 -- # return 0 00:20:42.310 03:05:21 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:20:42.310 03:05:21 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:42.310 03:05:21 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:20:42.310 03:05:21 -- scripts/common.sh@15 -- # local i 00:20:42.310 03:05:21 -- scripts/common.sh@18 -- # [[ =~ 
0000:00:11.0 ]] 00:20:42.310 03:05:21 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:42.310 03:05:21 -- scripts/common.sh@24 -- # return 0 00:20:42.310 03:05:21 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:20:42.310 03:05:21 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:42.310 03:05:21 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:42.310 03:05:21 -- scripts/common.sh@320 -- # uname -s 00:20:42.310 03:05:21 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:42.310 03:05:21 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:42.310 03:05:21 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:20:42.310 03:05:21 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:42.310 03:05:21 -- scripts/common.sh@320 -- # uname -s 00:20:42.310 03:05:21 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:20:42.310 03:05:21 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:20:42.310 03:05:21 -- scripts/common.sh@325 -- # (( 2 )) 00:20:42.310 03:05:21 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:42.310 03:05:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:42.310 03:05:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.310 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.310 ************************************ 00:20:42.310 START TEST spdk_target_abort 00:20:42.310 ************************************ 00:20:42.310 03:05:21 -- common/autotest_common.sh@1111 -- # spdk_target 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:42.310 03:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.310 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.310 spdk_targetn1 00:20:42.310 03:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:42.310 03:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.310 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.310 [2024-04-23 03:05:21.387276] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.310 03:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.310 03:05:21 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:42.310 03:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.310 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.310 03:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:42.311 03:05:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.311 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.311 03:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:20:42.311 03:05:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:20:42.311 03:05:21 -- common/autotest_common.sh@10 -- # set +x 00:20:42.311 [2024-04-23 03:05:21.423453] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.311 03:05:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:42.311 03:05:21 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:45.619 Initializing NVMe Controllers 00:20:45.619 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:45.619 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:45.619 Initialization complete. Launching workers. 
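rabort, entered above, builds the -r target string one field at a time (trtype, adrfam, traddr, trsvcid, subnqn) and then drives the abort example once per queue depth from qds=(4 24 64). A condensed sketch of that loop, with the binary path and NQN taken from the trace:

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        /home/vagrant/spdk_repo/spdk/build/examples/abort \
            -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The three runs whose output follows are exactly these invocations at queue depths 4, 24, and 64.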
00:20:45.619 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10027, failed: 0 00:20:45.619 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1051, failed to submit 8976 00:20:45.619 success 763, unsuccess 288, failed 0 00:20:45.619 03:05:24 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:45.619 03:05:24 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:48.906 Initializing NVMe Controllers 00:20:48.906 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:48.906 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:48.906 Initialization complete. Launching workers. 00:20:48.906 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9059, failed: 0 00:20:48.906 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1184, failed to submit 7875 00:20:48.906 success 377, unsuccess 807, failed 0 00:20:48.906 03:05:28 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:48.906 03:05:28 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:52.193 Initializing NVMe Controllers 00:20:52.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:20:52.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:52.193 Initialization complete. Launching workers. 00:20:52.193 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30725, failed: 0 00:20:52.193 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2380, failed to submit 28345 00:20:52.193 success 425, unsuccess 1955, failed 0 00:20:52.193 03:05:31 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:20:52.193 03:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.193 03:05:31 -- common/autotest_common.sh@10 -- # set +x 00:20:52.193 03:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.193 03:05:31 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:52.193 03:05:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:52.193 03:05:31 -- common/autotest_common.sh@10 -- # set +x 00:20:52.453 03:05:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:52.453 03:05:31 -- target/abort_qd_sizes.sh@61 -- # killprocess 96152 00:20:52.453 03:05:31 -- common/autotest_common.sh@936 -- # '[' -z 96152 ']' 00:20:52.453 03:05:31 -- common/autotest_common.sh@940 -- # kill -0 96152 00:20:52.453 03:05:31 -- common/autotest_common.sh@941 -- # uname 00:20:52.453 03:05:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:52.453 03:05:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 96152 00:20:52.453 killing process with pid 96152 00:20:52.453 03:05:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:52.453 03:05:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:52.453 03:05:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 96152' 00:20:52.453 03:05:31 -- common/autotest_common.sh@955 -- # kill 96152 
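Across the three runs above the counters are internally consistent: aborts submitted plus aborts that failed to submit equals the completed I/O count, and success plus unsuccess equals the submitted aborts. A quick check of the queue-depth-4 numbers (this reading of the counter names is inferred from the arithmetic, not from the tool's documentation):

    echo $(( 1051 + 8976 ))  # 10027, matches 'I/O completed'
    echo $((  763 +  288 ))  # 1051, matches 'abort submitted'

The same identities hold at queue depth 24 (1184 + 7875 = 9059) and queue depth 64 (2380 + 28345 = 30725).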
00:20:52.453 03:05:31 -- common/autotest_common.sh@960 -- # wait 96152 00:20:52.712 00:20:52.712 real 0m10.384s 00:20:52.712 user 0m39.919s 00:20:52.712 sys 0m2.081s 00:20:52.712 03:05:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:52.712 ************************************ 00:20:52.712 END TEST spdk_target_abort 00:20:52.712 03:05:31 -- common/autotest_common.sh@10 -- # set +x 00:20:52.712 ************************************ 00:20:52.712 03:05:31 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:20:52.712 03:05:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:20:52.712 03:05:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:52.712 03:05:31 -- common/autotest_common.sh@10 -- # set +x 00:20:52.712 ************************************ 00:20:52.712 START TEST kernel_target_abort 00:20:52.712 ************************************ 00:20:52.712 03:05:31 -- common/autotest_common.sh@1111 -- # kernel_target 00:20:52.712 03:05:31 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:52.712 03:05:31 -- nvmf/common.sh@717 -- # local ip 00:20:52.712 03:05:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:20:52.712 03:05:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:20:52.712 03:05:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:52.712 03:05:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:52.712 03:05:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:20:52.712 03:05:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:52.712 03:05:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:20:52.712 03:05:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:20:52.712 03:05:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:20:52.712 03:05:31 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:52.712 03:05:31 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:52.712 03:05:31 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:20:52.712 03:05:31 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:52.712 03:05:31 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:52.712 03:05:31 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:52.712 03:05:31 -- nvmf/common.sh@628 -- # local block nvme 00:20:52.712 03:05:31 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:52.712 03:05:31 -- nvmf/common.sh@631 -- # modprobe nvmet 00:20:52.712 03:05:31 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:52.712 03:05:31 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:53.280 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:53.280 Waiting for block devices as requested 00:20:53.280 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.280 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:53.280 03:05:32 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:53.280 03:05:32 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:53.280 03:05:32 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:20:53.280 03:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:53.280 03:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:53.280 03:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:53.280 03:05:32 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:20:53.280 03:05:32 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:20:53.280 03:05:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:53.539 No valid GPT data, bailing 00:20:53.539 03:05:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:53.539 03:05:32 -- scripts/common.sh@391 -- # pt= 00:20:53.539 03:05:32 -- scripts/common.sh@392 -- # return 1 00:20:53.539 03:05:32 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:20:53.539 03:05:32 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:53.539 03:05:32 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:53.539 03:05:32 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:20:53.539 03:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:53.539 03:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:53.539 03:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:53.539 03:05:32 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:20:53.539 03:05:32 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:20:53.539 03:05:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:53.539 No valid GPT data, bailing 00:20:53.539 03:05:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:53.539 03:05:32 -- scripts/common.sh@391 -- # pt= 00:20:53.539 03:05:32 -- scripts/common.sh@392 -- # return 1 00:20:53.539 03:05:32 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:20:53.539 03:05:32 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:53.539 03:05:32 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:53.539 03:05:32 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:20:53.539 03:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:53.539 03:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:53.539 03:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:53.539 03:05:32 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:20:53.539 03:05:32 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:20:53.539 03:05:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:53.539 No valid GPT data, bailing 00:20:53.539 03:05:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value 
/dev/nvme0n3 00:20:53.539 03:05:32 -- scripts/common.sh@391 -- # pt= 00:20:53.539 03:05:32 -- scripts/common.sh@392 -- # return 1 00:20:53.539 03:05:32 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:20:53.539 03:05:32 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:20:53.539 03:05:32 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:53.539 03:05:32 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:20:53.539 03:05:32 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:53.539 03:05:32 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:53.539 03:05:32 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:53.539 03:05:32 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:20:53.539 03:05:32 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:20:53.539 03:05:32 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:53.799 No valid GPT data, bailing 00:20:53.799 03:05:32 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:53.799 03:05:32 -- scripts/common.sh@391 -- # pt= 00:20:53.799 03:05:32 -- scripts/common.sh@392 -- # return 1 00:20:53.799 03:05:32 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:20:53.799 03:05:32 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:20:53.799 03:05:32 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:53.799 03:05:32 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:53.799 03:05:32 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:53.799 03:05:32 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:53.799 03:05:32 -- nvmf/common.sh@656 -- # echo 1 00:20:53.799 03:05:32 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:20:53.799 03:05:32 -- nvmf/common.sh@658 -- # echo 1 00:20:53.799 03:05:32 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:20:53.799 03:05:32 -- nvmf/common.sh@661 -- # echo tcp 00:20:53.799 03:05:32 -- nvmf/common.sh@662 -- # echo 4420 00:20:53.799 03:05:32 -- nvmf/common.sh@663 -- # echo ipv4 00:20:53.799 03:05:32 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:53.799 03:05:32 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 --hostid=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 -a 10.0.0.1 -t tcp -s 4420 00:20:53.799 00:20:53.799 Discovery Log Number of Records 2, Generation counter 2 00:20:53.799 =====Discovery Log Entry 0====== 00:20:53.799 trtype: tcp 00:20:53.799 adrfam: ipv4 00:20:53.799 subtype: current discovery subsystem 00:20:53.799 treq: not specified, sq flow control disable supported 00:20:53.799 portid: 1 00:20:53.799 trsvcid: 4420 00:20:53.799 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:53.799 traddr: 10.0.0.1 00:20:53.799 eflags: none 00:20:53.799 sectype: none 00:20:53.799 =====Discovery Log Entry 1====== 00:20:53.799 trtype: tcp 00:20:53.799 adrfam: ipv4 00:20:53.799 subtype: nvme subsystem 00:20:53.799 treq: not specified, sq flow control disable supported 00:20:53.799 portid: 1 00:20:53.799 trsvcid: 4420 00:20:53.799 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:53.799 traddr: 10.0.0.1 00:20:53.799 eflags: none 00:20:53.799 sectype: none 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:20:53.799 
03:05:32 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:53.799 03:05:32 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:57.090 Initializing NVMe Controllers 00:20:57.090 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:57.090 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:57.090 Initialization complete. Launching workers. 00:20:57.090 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31565, failed: 0 00:20:57.090 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31565, failed to submit 0 00:20:57.090 success 0, unsuccess 31565, failed 0 00:20:57.090 03:05:35 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:57.090 03:05:35 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:00.374 Initializing NVMe Controllers 00:21:00.374 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:00.374 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:00.374 Initialization complete. Launching workers. 
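configure_kernel_target, traced above, wires the selected block device (/dev/nvme1n1, the last namespace the GPT probe bailed on) into the in-kernel nvmet target purely through configfs. A condensed sketch: the mkdir, echo, and ln steps appear in the trace, while the redirect targets are inferred from the standard nvmet configfs layout and are not visible in the xtrace output:

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed attribute file
    echo 1 > "$subsys/attr_allow_any_host"                         # assumed attribute file
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"

The two-record discovery listing above (the discovery subsystem plus an nqn.2016-06.io.spdk:testnqn entry at 10.0.0.1:4420) confirms the port went live before the abort runs start.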
00:21:00.374 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 64206, failed: 0 00:21:00.374 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27236, failed to submit 36970 00:21:00.374 success 0, unsuccess 27236, failed 0 00:21:00.374 03:05:39 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:21:00.374 03:05:39 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:21:03.662 Initializing NVMe Controllers 00:21:03.662 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:21:03.662 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:21:03.662 Initialization complete. Launching workers. 00:21:03.662 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74552, failed: 0 00:21:03.662 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18598, failed to submit 55954 00:21:03.662 success 0, unsuccess 18598, failed 0 00:21:03.662 03:05:42 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:21:03.662 03:05:42 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:21:03.662 03:05:42 -- nvmf/common.sh@675 -- # echo 0 00:21:03.662 03:05:42 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:03.662 03:05:42 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:21:03.662 03:05:42 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:03.662 03:05:42 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:21:03.662 03:05:42 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:21:03.662 03:05:42 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:21:03.662 03:05:42 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:03.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:04.867 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:21:04.867 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:04.867 ************************************ 00:21:04.867 END TEST kernel_target_abort 00:21:04.867 ************************************ 00:21:04.867 00:21:04.867 real 0m12.161s 00:21:04.867 user 0m6.161s 00:21:04.867 sys 0m3.410s 00:21:04.868 03:05:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:04.868 03:05:43 -- common/autotest_common.sh@10 -- # set +x 00:21:04.868 03:05:44 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:04.868 03:05:44 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:21:04.868 03:05:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:04.868 03:05:44 -- nvmf/common.sh@117 -- # sync 00:21:05.126 03:05:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:05.126 03:05:44 -- nvmf/common.sh@120 -- # set +e 00:21:05.126 03:05:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:05.126 03:05:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:05.126 rmmod nvme_tcp 00:21:05.126 rmmod nvme_fabrics 00:21:05.126 rmmod nvme_keyring 00:21:05.126 03:05:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:05.126 03:05:44 -- nvmf/common.sh@124 -- # set -e 00:21:05.126 
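clean_kernel_target undoes the configfs setup in strict reverse order. As with the setup, the echo target is not visible in the trace, so treating it as the namespace enable attribute is an assumption (paths reused from the setup sketch):

# disable the namespace, unlink the subsystem from the port, then remove
# the configfs nodes and unload the kernel target modules
echo 0 > "$subsys/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$subsys/namespaces/1" "$port" "$subsys"
modprobe -r nvmet_tcp nvmet

With the kernel target gone, setup.sh rebinds the NVMe devices to uio_pci_generic and nvmftestfini unloads the initiator-side nvme_tcp, nvme_fabrics, and nvme_keyring modules, as the rmmod lines above show.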
03:05:44 -- nvmf/common.sh@125 -- # return 0 00:21:05.126 03:05:44 -- nvmf/common.sh@478 -- # '[' -n 96152 ']' 00:21:05.126 03:05:44 -- nvmf/common.sh@479 -- # killprocess 96152 00:21:05.126 03:05:44 -- common/autotest_common.sh@936 -- # '[' -z 96152 ']' 00:21:05.126 Process with pid 96152 is not found 00:21:05.126 03:05:44 -- common/autotest_common.sh@940 -- # kill -0 96152 00:21:05.126 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (96152) - No such process 00:21:05.126 03:05:44 -- common/autotest_common.sh@963 -- # echo 'Process with pid 96152 is not found' 00:21:05.126 03:05:44 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:21:05.126 03:05:44 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:21:05.384 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:05.384 Waiting for block devices as requested 00:21:05.384 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.643 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:21:05.643 03:05:44 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:05.643 03:05:44 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:05.643 03:05:44 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:05.643 03:05:44 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:05.643 03:05:44 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.643 03:05:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:21:05.643 03:05:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.643 03:05:44 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:05.643 ************************************ 00:21:05.643 END TEST nvmf_abort_qd_sizes 00:21:05.643 ************************************ 00:21:05.643 00:21:05.643 real 0m25.223s 00:21:05.643 user 0m47.117s 00:21:05.643 sys 0m6.824s 00:21:05.643 03:05:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:05.643 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:05.643 03:05:44 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:05.643 03:05:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:05.643 03:05:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:05.643 03:05:44 -- common/autotest_common.sh@10 -- # set +x 00:21:05.902 ************************************ 00:21:05.902 START TEST keyring_file 00:21:05.902 ************************************ 00:21:05.902 03:05:44 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:21:05.902 * Looking for test storage... 
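Every test script in this log is driven by the same run_test wrapper, which produces the banner pairs and the real/user/sys timing summaries seen above. A minimal sketch of the idiom; the real helper in autotest_common.sh additionally validates its argument count and manages xtrace:

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"    # the wrapped script, e.g. test/keyring/file.sh
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}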
00:21:05.902 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:05.902 03:05:44 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:05.902 03:05:44 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:05.902 03:05:44 -- nvmf/common.sh@7 -- # uname -s 00:21:05.902 03:05:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.902 03:05:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.902 03:05:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.902 03:05:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.902 03:05:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.902 03:05:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.902 03:05:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.902 03:05:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.902 03:05:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.902 03:05:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.902 03:05:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:21:05.902 03:05:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=c9af0bbc-4b1b-4430-a22a-f5c44d7e0298 00:21:05.902 03:05:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.902 03:05:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.902 03:05:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:05.902 03:05:44 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.902 03:05:44 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:05.902 03:05:44 -- scripts/common.sh@502 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.902 03:05:44 -- scripts/common.sh@510 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.902 03:05:44 -- scripts/common.sh@511 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.902 03:05:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.902 03:05:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.902 03:05:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.902 03:05:44 -- paths/export.sh@5 -- # export PATH 00:21:05.902 03:05:44 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.902 03:05:44 -- nvmf/common.sh@47 -- # : 0 00:21:05.902 03:05:44 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.902 03:05:44 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.902 03:05:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.902 03:05:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.902 03:05:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.902 03:05:44 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.902 03:05:44 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.902 03:05:44 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.902 03:05:44 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:05.902 03:05:44 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:05.902 03:05:44 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:05.902 03:05:44 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:21:05.902 03:05:44 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:21:05.902 03:05:44 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:21:05.902 03:05:44 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:05.902 03:05:44 -- keyring/common.sh@15 -- # local name key digest path 00:21:05.902 03:05:44 -- keyring/common.sh@17 -- # name=key0 00:21:05.902 03:05:44 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:05.902 03:05:44 -- keyring/common.sh@17 -- # digest=0 00:21:05.902 03:05:44 -- keyring/common.sh@18 -- # mktemp 00:21:05.902 03:05:44 -- keyring/common.sh@18 -- # path=/tmp/tmp.xvkktSDmc4 00:21:05.902 03:05:44 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:05.902 03:05:44 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:05.902 03:05:44 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:05.902 03:05:44 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:05.902 03:05:44 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:05.902 03:05:44 -- nvmf/common.sh@693 -- # digest=0 00:21:05.902 03:05:44 -- nvmf/common.sh@694 -- # python - 00:21:05.902 03:05:44 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.xvkktSDmc4 00:21:05.902 03:05:44 -- keyring/common.sh@23 -- # echo /tmp/tmp.xvkktSDmc4 00:21:05.902 03:05:44 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.xvkktSDmc4 00:21:05.902 03:05:44 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:21:05.902 03:05:44 -- keyring/common.sh@15 -- # local name key digest path 00:21:05.902 03:05:44 -- keyring/common.sh@17 -- # name=key1 00:21:05.902 03:05:44 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:05.902 03:05:44 -- keyring/common.sh@17 -- # digest=0 00:21:05.902 03:05:44 -- keyring/common.sh@18 -- # mktemp 00:21:05.902 03:05:45 -- keyring/common.sh@18 -- # path=/tmp/tmp.P9WyrFZehC 00:21:05.902 03:05:45 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:21:05.902 03:05:45 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:21:05.902 03:05:45 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:05.902 03:05:45 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:05.902 03:05:45 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:21:05.902 03:05:45 -- nvmf/common.sh@693 -- # digest=0 00:21:05.902 03:05:45 -- nvmf/common.sh@694 -- # python - 00:21:05.902 03:05:45 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.P9WyrFZehC 00:21:05.902 03:05:45 -- keyring/common.sh@23 -- # echo /tmp/tmp.P9WyrFZehC 00:21:05.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.902 03:05:45 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.P9WyrFZehC 00:21:05.902 03:05:45 -- keyring/file.sh@30 -- # tgtpid=97035 00:21:05.903 03:05:45 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:05.903 03:05:45 -- keyring/file.sh@32 -- # waitforlisten 97035 00:21:05.903 03:05:45 -- common/autotest_common.sh@817 -- # '[' -z 97035 ']' 00:21:05.903 03:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.903 03:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:05.903 03:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.903 03:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:05.903 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:06.161 [2024-04-23 03:05:45.122685] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:21:06.161 [2024-04-23 03:05:45.123016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97035 ] 00:21:06.161 [2024-04-23 03:05:45.245640] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
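prep_key writes each key to a mktemp file in the NVMe TLS PSK interchange format and locks the permissions down to 0600. The python body invoked by format_key is not echoed by xtrace, so the CRC32-plus-base64 framing in this sketch is an assumption based on the interchange format; the key string itself is used verbatim, as in the test:

format_interchange_psk() {    # args: key string, digest id (0 = none)
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

path=$(mktemp)
format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"    # the keyring refuses files readable by group or other, as tested later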
00:21:06.161 [2024-04-23 03:05:45.267437] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.161 [2024-04-23 03:05:45.311150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:06.420 03:05:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.420 03:05:45 -- common/autotest_common.sh@850 -- # return 0 00:21:06.420 03:05:45 -- keyring/file.sh@33 -- # rpc_cmd 00:21:06.420 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.420 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:06.420 [2024-04-23 03:05:45.484043] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.420 null0 00:21:06.420 [2024-04-23 03:05:45.515984] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:06.420 [2024-04-23 03:05:45.516405] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:06.420 [2024-04-23 03:05:45.524009] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:06.420 03:05:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:06.420 03:05:45 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:06.420 03:05:45 -- common/autotest_common.sh@638 -- # local es=0 00:21:06.420 03:05:45 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:06.420 03:05:45 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:21:06.420 03:05:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:06.420 03:05:45 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:21:06.420 03:05:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:06.420 03:05:45 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:21:06.420 03:05:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:06.420 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:06.420 [2024-04-23 03:05:45.535999] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.request: 00:21:06.420 { 00:21:06.420 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:21:06.420 "secure_channel": false, 00:21:06.420 "listen_address": { 00:21:06.420 "trtype": "tcp", 00:21:06.420 "traddr": "127.0.0.1", 00:21:06.420 "trsvcid": "4420" 00:21:06.420 }, 00:21:06.420 "method": "nvmf_subsystem_add_listener", 00:21:06.420 "req_id": 1 00:21:06.420 } 00:21:06.420 Got JSON-RPC error response 00:21:06.420 response: 00:21:06.420 { 00:21:06.420 "code": -32602, 00:21:06.420 "message": "Invalid parameters" 00:21:06.420 } 00:21:06.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
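The rejected listener registration above is a deliberate negative test: a plain add_listener for a port already registered with a different secure-channel setting must fail with -32602. The NOT wrapper traced around it turns that expected failure into a pass (the es=1 check resolves at the start of what follows); stripped of its argument type checks it reduces to:

NOT() {
    # succeed only if the wrapped command fails
    if "$@"; then
        return 1
    fi
    return 0
}

# usage, as in keyring/file.sh@43:
# NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0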
00:21:06.420 03:05:45 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:21:06.420 03:05:45 -- common/autotest_common.sh@641 -- # es=1 00:21:06.420 03:05:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:06.420 03:05:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:06.420 03:05:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:06.420 03:05:45 -- keyring/file.sh@46 -- # bperfpid=97043 00:21:06.420 03:05:45 -- keyring/file.sh@48 -- # waitforlisten 97043 /var/tmp/bperf.sock 00:21:06.420 03:05:45 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:21:06.420 03:05:45 -- common/autotest_common.sh@817 -- # '[' -z 97043 ']' 00:21:06.420 03:05:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:06.420 03:05:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:06.420 03:05:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:06.420 03:05:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:06.420 03:05:45 -- common/autotest_common.sh@10 -- # set +x 00:21:06.691 [2024-04-23 03:05:45.592721] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:21:06.691 [2024-04-23 03:05:45.592959] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97043 ] 00:21:06.691 [2024-04-23 03:05:45.716027] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
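All of the key bookkeeping that follows goes through three small helpers from keyring/common.sh, visible piecemeal in the trace: bperf_cmd targets the bdevperf RPC socket, get_key filters keyring_get_keys by name, and get_refcnt extracts the reference count:

bperf_cmd()  { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
get_key()    { bperf_cmd keyring_get_keys | jq ".[] | select(.name == \"$1\")"; }
get_refcnt() { get_key "$1" | jq -r .refcnt; }

The first checks below simply assert that both registered paths round-trip through the keyring (file.sh@51-52) and that each freshly added key starts with a refcnt of 1.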
00:21:06.691 [2024-04-23 03:05:45.736741] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:06.692 [2024-04-23 03:05:45.779736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:06.971 03:05:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:06.971 03:05:45 -- common/autotest_common.sh@850 -- # return 0 00:21:06.971 03:05:45 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:06.971 03:05:45 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:06.971 03:05:46 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P9WyrFZehC 00:21:06.971 03:05:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P9WyrFZehC 00:21:07.538 03:05:46 -- keyring/file.sh@51 -- # get_key key0 00:21:07.538 03:05:46 -- keyring/file.sh@51 -- # jq -r .path 00:21:07.538 03:05:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.538 03:05:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.538 03:05:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.538 03:05:46 -- keyring/file.sh@51 -- # [[ /tmp/tmp.xvkktSDmc4 == \/\t\m\p\/\t\m\p\.\x\v\k\k\t\S\D\m\c\4 ]] 00:21:07.538 03:05:46 -- keyring/file.sh@52 -- # get_key key1 00:21:07.538 03:05:46 -- keyring/file.sh@52 -- # jq -r .path 00:21:07.538 03:05:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.538 03:05:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:07.538 03:05:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.797 03:05:46 -- keyring/file.sh@52 -- # [[ /tmp/tmp.P9WyrFZehC == \/\t\m\p\/\t\m\p\.\P\9\W\y\r\F\Z\e\h\C ]] 00:21:07.797 03:05:46 -- keyring/file.sh@53 -- # get_refcnt key0 00:21:07.797 03:05:46 -- keyring/common.sh@12 -- # get_key key0 00:21:07.797 03:05:46 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.797 03:05:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.797 03:05:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.797 03:05:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:08.056 03:05:47 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:21:08.056 03:05:47 -- keyring/file.sh@54 -- # get_refcnt key1 00:21:08.056 03:05:47 -- keyring/common.sh@12 -- # get_key key1 00:21:08.056 03:05:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.056 03:05:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.056 03:05:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:08.056 03:05:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.314 03:05:47 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:21:08.314 03:05:47 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.314 03:05:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.572 
[2024-04-23 03:05:47.672814] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:08.830 nvme0n1 00:21:08.830 03:05:47 -- keyring/file.sh@59 -- # get_refcnt key0 00:21:08.830 03:05:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:08.830 03:05:47 -- keyring/common.sh@12 -- # get_key key0 00:21:08.830 03:05:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:08.830 03:05:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:08.830 03:05:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:09.087 03:05:47 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:21:09.088 03:05:47 -- keyring/file.sh@60 -- # get_refcnt key1 00:21:09.088 03:05:47 -- keyring/common.sh@12 -- # get_key key1 00:21:09.088 03:05:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:09.088 03:05:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:09.088 03:05:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:09.088 03:05:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:09.346 03:05:48 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:21:09.346 03:05:48 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:09.346 Running I/O for 1 seconds... 00:21:10.282 00:21:10.282 Latency(us) 00:21:10.282 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:10.282 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:10.282 nvme0n1 : 1.01 10613.27 41.46 0.00 0.00 12011.64 6762.12 20733.21 00:21:10.282 =================================================================================================================== 00:21:10.282 Total : 10613.27 41.46 0.00 0.00 12011.64 6762.12 20733.21 00:21:10.282 0 00:21:10.282 03:05:49 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:10.282 03:05:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:10.549 03:05:49 -- keyring/file.sh@65 -- # get_refcnt key0 00:21:10.550 03:05:49 -- keyring/common.sh@12 -- # get_key key0 00:21:10.550 03:05:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.550 03:05:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.550 03:05:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.550 03:05:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:10.811 03:05:49 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:21:10.811 03:05:49 -- keyring/file.sh@66 -- # get_refcnt key1 00:21:10.811 03:05:49 -- keyring/common.sh@12 -- # get_key key1 00:21:10.811 03:05:49 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.811 03:05:49 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.811 03:05:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.811 03:05:49 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:11.069 03:05:50 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:11.069 03:05:50 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.069 03:05:50 
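The passing path is now complete: attach over TLS with key0 (which bumps its refcnt to 2), drive the preconfigured random read/write job through bdevperf's RPC helper, then detach. Reduced to its three commands:

bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
bperf_cmd bdev_nvme_detach_controller nvme0

The one-second run above completed roughly 10613 IOPS with no failures. The attach attempt that follows reuses the same transport ID but presents key1 rather than the PSK the target was configured with, so the connection is torn down during setup and the controller ends in a failed state.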
-- common/autotest_common.sh@638 -- # local es=0 00:21:11.069 03:05:50 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.069 03:05:50 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:21:11.069 03:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:11.069 03:05:50 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:21:11.069 03:05:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:11.069 03:05:50 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.069 03:05:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:11.328 [2024-04-23 03:05:50.402829] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:11.328 [2024-04-23 03:05:50.403440] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95dfe0 (107): Transport endpoint is not connected 00:21:11.328 [2024-04-23 03:05:50.404427] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95dfe0 (9): Bad file descriptor 00:21:11.329 [2024-04-23 03:05:50.405424] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:11.329 [2024-04-23 03:05:50.405444] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:11.329 [2024-04-23 03:05:50.405453] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:21:11.329 request: 00:21:11.329 { 00:21:11.329 "name": "nvme0", 00:21:11.329 "trtype": "tcp", 00:21:11.329 "traddr": "127.0.0.1", 00:21:11.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:11.329 "adrfam": "ipv4", 00:21:11.329 "trsvcid": "4420", 00:21:11.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:11.329 "psk": "key1", 00:21:11.329 "method": "bdev_nvme_attach_controller", 00:21:11.329 "req_id": 1 00:21:11.329 } 00:21:11.329 Got JSON-RPC error response 00:21:11.329 response: 00:21:11.329 { 00:21:11.329 "code": -32602, 00:21:11.329 "message": "Invalid parameters" 00:21:11.329 } 00:21:11.329 03:05:50 -- common/autotest_common.sh@641 -- # es=1 00:21:11.329 03:05:50 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:11.329 03:05:50 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:11.329 03:05:50 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:11.329 03:05:50 -- keyring/file.sh@71 -- # get_refcnt key0 00:21:11.329 03:05:50 -- keyring/common.sh@12 -- # get_key key0 00:21:11.329 03:05:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.329 03:05:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.329 03:05:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:11.329 03:05:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.587 03:05:50 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:21:11.587 03:05:50 -- keyring/file.sh@72 -- # get_refcnt key1 00:21:11.587 03:05:50 -- keyring/common.sh@12 -- # get_key key1 00:21:11.587 03:05:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:11.587 03:05:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:11.587 03:05:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:11.587 03:05:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.846 03:05:50 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:11.846 03:05:50 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:21:11.846 03:05:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:12.104 03:05:51 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:21:12.104 03:05:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:12.363 03:05:51 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:21:12.363 03:05:51 -- keyring/file.sh@77 -- # jq length 00:21:12.363 03:05:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:12.621 03:05:51 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:21:12.621 03:05:51 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.xvkktSDmc4 00:21:12.621 03:05:51 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:12.621 03:05:51 -- common/autotest_common.sh@638 -- # local es=0 00:21:12.621 03:05:51 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:12.621 03:05:51 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:21:12.621 03:05:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:12.621 03:05:51 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:21:12.621 03:05:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:12.621 03:05:51 -- 
common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:12.621 03:05:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:12.881 [2024-04-23 03:05:51.939785] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xvkktSDmc4': 0100660 00:21:12.881 [2024-04-23 03:05:51.939842] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:12.881 request: 00:21:12.881 { 00:21:12.881 "name": "key0", 00:21:12.881 "path": "/tmp/tmp.xvkktSDmc4", 00:21:12.881 "method": "keyring_file_add_key", 00:21:12.881 "req_id": 1 00:21:12.881 } 00:21:12.881 Got JSON-RPC error response 00:21:12.881 response: 00:21:12.881 { 00:21:12.881 "code": -1, 00:21:12.881 "message": "Operation not permitted" 00:21:12.881 } 00:21:12.881 03:05:51 -- common/autotest_common.sh@641 -- # es=1 00:21:12.881 03:05:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:12.881 03:05:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:12.881 03:05:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:12.881 03:05:51 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.xvkktSDmc4 00:21:12.881 03:05:51 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:12.881 03:05:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4 00:21:13.141 03:05:52 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.xvkktSDmc4 00:21:13.141 03:05:52 -- keyring/file.sh@88 -- # get_refcnt key0 00:21:13.141 03:05:52 -- keyring/common.sh@12 -- # get_key key0 00:21:13.141 03:05:52 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:13.141 03:05:52 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:13.141 03:05:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:13.141 03:05:52 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:13.400 03:05:52 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:21:13.400 03:05:52 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.400 03:05:52 -- common/autotest_common.sh@638 -- # local es=0 00:21:13.400 03:05:52 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.400 03:05:52 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:21:13.400 03:05:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:13.400 03:05:52 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:21:13.400 03:05:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:13.400 03:05:52 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.400 03:05:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:13.659 [2024-04-23 03:05:52.743987] keyring.c: 
29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.xvkktSDmc4': No such file or directory 00:21:13.659 [2024-04-23 03:05:52.744049] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:13.659 [2024-04-23 03:05:52.744075] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:13.659 [2024-04-23 03:05:52.744084] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:13.659 [2024-04-23 03:05:52.744092] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:13.659 request: 00:21:13.659 { 00:21:13.659 "name": "nvme0", 00:21:13.659 "trtype": "tcp", 00:21:13.659 "traddr": "127.0.0.1", 00:21:13.659 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:13.659 "adrfam": "ipv4", 00:21:13.659 "trsvcid": "4420", 00:21:13.659 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:13.659 "psk": "key0", 00:21:13.659 "method": "bdev_nvme_attach_controller", 00:21:13.659 "req_id": 1 00:21:13.659 } 00:21:13.659 Got JSON-RPC error response 00:21:13.659 response: 00:21:13.659 { 00:21:13.659 "code": -19, 00:21:13.659 "message": "No such device" 00:21:13.659 } 00:21:13.659 03:05:52 -- common/autotest_common.sh@641 -- # es=1 00:21:13.659 03:05:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:13.659 03:05:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:13.659 03:05:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:13.659 03:05:52 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:21:13.659 03:05:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:13.918 03:05:53 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:13.918 03:05:53 -- keyring/common.sh@15 -- # local name key digest path 00:21:13.918 03:05:53 -- keyring/common.sh@17 -- # name=key0 00:21:13.918 03:05:53 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:13.918 03:05:53 -- keyring/common.sh@17 -- # digest=0 00:21:13.918 03:05:53 -- keyring/common.sh@18 -- # mktemp 00:21:13.918 03:05:53 -- keyring/common.sh@18 -- # path=/tmp/tmp.WNQiR4VTzP 00:21:13.918 03:05:53 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:13.918 03:05:53 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:13.918 03:05:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:21:13.918 03:05:53 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:21:13.918 03:05:53 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:21:13.918 03:05:53 -- nvmf/common.sh@693 -- # digest=0 00:21:13.918 03:05:53 -- nvmf/common.sh@694 -- # python - 00:21:14.177 03:05:53 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.WNQiR4VTzP 00:21:14.177 03:05:53 -- keyring/common.sh@23 -- # echo /tmp/tmp.WNQiR4VTzP 00:21:14.177 03:05:53 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.WNQiR4VTzP 00:21:14.177 03:05:53 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WNQiR4VTzP 00:21:14.177 03:05:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WNQiR4VTzP 00:21:14.435 03:05:53 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 
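Two more negative cases bracket this section. First, keyring_file_add_key refuses a key file whose mode grants group or other access (the 0100660 error above); second, once the backing file is deleted, an attach referencing the key fails with -19 because the keyring can no longer stat the path:

chmod 0660 /tmp/tmp.xvkktSDmc4
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4    # -1, Operation not permitted
chmod 0600 /tmp/tmp.xvkktSDmc4
bperf_cmd keyring_file_add_key key0 /tmp/tmp.xvkktSDmc4        # accepted again
rm -f /tmp/tmp.xvkktSDmc4                                      # key stays registered...
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0   # ...but unusable

The run that follows also exercises deferred removal: keyring_file_remove_key on a key still referenced by nvme0 merely flags it (.removed reports true while refcnt stays 1), and the slot is only reclaimed once bdev_nvme_detach_controller drops the last reference, after which keyring_get_keys reports length 0.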
00:21:14.435 03:05:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:14.694 nvme0n1 00:21:14.694 03:05:53 -- keyring/file.sh@99 -- # get_refcnt key0 00:21:14.694 03:05:53 -- keyring/common.sh@12 -- # get_key key0 00:21:14.694 03:05:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:14.694 03:05:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:14.694 03:05:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.694 03:05:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:14.953 03:05:53 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:21:14.953 03:05:53 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:21:14.953 03:05:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:15.211 03:05:54 -- keyring/file.sh@101 -- # get_key key0 00:21:15.211 03:05:54 -- keyring/file.sh@101 -- # jq -r .removed 00:21:15.211 03:05:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.211 03:05:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.211 03:05:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.470 03:05:54 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:21:15.470 03:05:54 -- keyring/file.sh@102 -- # get_refcnt key0 00:21:15.470 03:05:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:15.470 03:05:54 -- keyring/common.sh@12 -- # get_key key0 00:21:15.470 03:05:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:15.470 03:05:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.470 03:05:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:15.731 03:05:54 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:21:15.731 03:05:54 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:15.731 03:05:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:15.990 03:05:55 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:21:15.990 03:05:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:15.991 03:05:55 -- keyring/file.sh@104 -- # jq length 00:21:16.250 03:05:55 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:21:16.250 03:05:55 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.WNQiR4VTzP 00:21:16.250 03:05:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.WNQiR4VTzP 00:21:16.510 03:05:55 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.P9WyrFZehC 00:21:16.510 03:05:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.P9WyrFZehC 00:21:16.769 03:05:55 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:16.769 03:05:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:17.028 nvme0n1 00:21:17.028 03:05:56 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:21:17.028 03:05:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:17.596 03:05:56 -- keyring/file.sh@112 -- # config='{ 00:21:17.596 "subsystems": [ 00:21:17.596 { 00:21:17.596 "subsystem": "keyring", 00:21:17.596 "config": [ 00:21:17.596 { 00:21:17.596 "method": "keyring_file_add_key", 00:21:17.596 "params": { 00:21:17.596 "name": "key0", 00:21:17.596 "path": "/tmp/tmp.WNQiR4VTzP" 00:21:17.596 } 00:21:17.596 }, 00:21:17.596 { 00:21:17.596 "method": "keyring_file_add_key", 00:21:17.596 "params": { 00:21:17.596 "name": "key1", 00:21:17.596 "path": "/tmp/tmp.P9WyrFZehC" 00:21:17.596 } 00:21:17.596 } 00:21:17.596 ] 00:21:17.596 }, 00:21:17.596 { 00:21:17.596 "subsystem": "iobuf", 00:21:17.596 "config": [ 00:21:17.596 { 00:21:17.596 "method": "iobuf_set_options", 00:21:17.596 "params": { 00:21:17.596 "small_pool_count": 8192, 00:21:17.596 "large_pool_count": 1024, 00:21:17.596 "small_bufsize": 8192, 00:21:17.596 "large_bufsize": 135168 00:21:17.596 } 00:21:17.596 } 00:21:17.596 ] 00:21:17.596 }, 00:21:17.596 { 00:21:17.596 "subsystem": "sock", 00:21:17.596 "config": [ 00:21:17.596 { 00:21:17.596 "method": "sock_impl_set_options", 00:21:17.596 "params": { 00:21:17.596 "impl_name": "uring", 00:21:17.596 "recv_buf_size": 2097152, 00:21:17.596 "send_buf_size": 2097152, 00:21:17.596 "enable_recv_pipe": true, 00:21:17.597 "enable_quickack": false, 00:21:17.597 "enable_placement_id": 0, 00:21:17.597 "enable_zerocopy_send_server": false, 00:21:17.597 "enable_zerocopy_send_client": false, 00:21:17.597 "zerocopy_threshold": 0, 00:21:17.597 "tls_version": 0, 00:21:17.597 "enable_ktls": false 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "sock_impl_set_options", 00:21:17.597 "params": { 00:21:17.597 "impl_name": "posix", 00:21:17.597 "recv_buf_size": 2097152, 00:21:17.597 "send_buf_size": 2097152, 00:21:17.597 "enable_recv_pipe": true, 00:21:17.597 "enable_quickack": false, 00:21:17.597 "enable_placement_id": 0, 00:21:17.597 "enable_zerocopy_send_server": true, 00:21:17.597 "enable_zerocopy_send_client": false, 00:21:17.597 "zerocopy_threshold": 0, 00:21:17.597 "tls_version": 0, 00:21:17.597 "enable_ktls": false 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "sock_impl_set_options", 00:21:17.597 "params": { 00:21:17.597 "impl_name": "ssl", 00:21:17.597 "recv_buf_size": 4096, 00:21:17.597 "send_buf_size": 4096, 00:21:17.597 "enable_recv_pipe": true, 00:21:17.597 "enable_quickack": false, 00:21:17.597 "enable_placement_id": 0, 00:21:17.597 "enable_zerocopy_send_server": true, 00:21:17.597 "enable_zerocopy_send_client": false, 00:21:17.597 "zerocopy_threshold": 0, 00:21:17.597 "tls_version": 0, 00:21:17.597 "enable_ktls": false 00:21:17.597 } 00:21:17.597 } 00:21:17.597 ] 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "subsystem": "vmd", 00:21:17.597 "config": [] 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "subsystem": "accel", 00:21:17.597 "config": [ 00:21:17.597 { 00:21:17.597 "method": "accel_set_options", 00:21:17.597 "params": { 00:21:17.597 "small_cache_size": 128, 00:21:17.597 "large_cache_size": 16, 00:21:17.597 "task_count": 2048, 00:21:17.597 "sequence_count": 2048, 00:21:17.597 "buf_count": 2048 00:21:17.597 } 00:21:17.597 } 
00:21:17.597 ] 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "subsystem": "bdev", 00:21:17.597 "config": [ 00:21:17.597 { 00:21:17.597 "method": "bdev_set_options", 00:21:17.597 "params": { 00:21:17.597 "bdev_io_pool_size": 65535, 00:21:17.597 "bdev_io_cache_size": 256, 00:21:17.597 "bdev_auto_examine": true, 00:21:17.597 "iobuf_small_cache_size": 128, 00:21:17.597 "iobuf_large_cache_size": 16 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "bdev_raid_set_options", 00:21:17.597 "params": { 00:21:17.597 "process_window_size_kb": 1024 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "bdev_iscsi_set_options", 00:21:17.597 "params": { 00:21:17.597 "timeout_sec": 30 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "bdev_nvme_set_options", 00:21:17.597 "params": { 00:21:17.597 "action_on_timeout": "none", 00:21:17.597 "timeout_us": 0, 00:21:17.597 "timeout_admin_us": 0, 00:21:17.597 "keep_alive_timeout_ms": 10000, 00:21:17.597 "arbitration_burst": 0, 00:21:17.597 "low_priority_weight": 0, 00:21:17.597 "medium_priority_weight": 0, 00:21:17.597 "high_priority_weight": 0, 00:21:17.597 "nvme_adminq_poll_period_us": 10000, 00:21:17.597 "nvme_ioq_poll_period_us": 0, 00:21:17.597 "io_queue_requests": 512, 00:21:17.597 "delay_cmd_submit": true, 00:21:17.597 "transport_retry_count": 4, 00:21:17.597 "bdev_retry_count": 3, 00:21:17.597 "transport_ack_timeout": 0, 00:21:17.597 "ctrlr_loss_timeout_sec": 0, 00:21:17.597 "reconnect_delay_sec": 0, 00:21:17.597 "fast_io_fail_timeout_sec": 0, 00:21:17.597 "disable_auto_failback": false, 00:21:17.597 "generate_uuids": false, 00:21:17.597 "transport_tos": 0, 00:21:17.597 "nvme_error_stat": false, 00:21:17.597 "rdma_srq_size": 0, 00:21:17.597 "io_path_stat": false, 00:21:17.597 "allow_accel_sequence": false, 00:21:17.597 "rdma_max_cq_size": 0, 00:21:17.597 "rdma_cm_event_timeout_ms": 0, 00:21:17.597 "dhchap_digests": [ 00:21:17.597 "sha256", 00:21:17.597 "sha384", 00:21:17.597 "sha512" 00:21:17.597 ], 00:21:17.597 "dhchap_dhgroups": [ 00:21:17.597 "null", 00:21:17.597 "ffdhe2048", 00:21:17.597 "ffdhe3072", 00:21:17.597 "ffdhe4096", 00:21:17.597 "ffdhe6144", 00:21:17.597 "ffdhe8192" 00:21:17.597 ] 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "bdev_nvme_attach_controller", 00:21:17.597 "params": { 00:21:17.597 "name": "nvme0", 00:21:17.597 "trtype": "TCP", 00:21:17.597 "adrfam": "IPv4", 00:21:17.597 "traddr": "127.0.0.1", 00:21:17.597 "trsvcid": "4420", 00:21:17.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.597 "prchk_reftag": false, 00:21:17.597 "prchk_guard": false, 00:21:17.597 "ctrlr_loss_timeout_sec": 0, 00:21:17.597 "reconnect_delay_sec": 0, 00:21:17.597 "fast_io_fail_timeout_sec": 0, 00:21:17.597 "psk": "key0", 00:21:17.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.597 "hdgst": false, 00:21:17.597 "ddgst": false 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "bdev_nvme_set_hotplug", 00:21:17.597 "params": { 00:21:17.597 "period_us": 100000, 00:21:17.597 "enable": false 00:21:17.597 } 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "method": "bdev_wait_for_examine" 00:21:17.597 } 00:21:17.597 ] 00:21:17.597 }, 00:21:17.597 { 00:21:17.597 "subsystem": "nbd", 00:21:17.597 "config": [] 00:21:17.597 } 00:21:17.597 ] 00:21:17.597 }' 00:21:17.597 03:05:56 -- keyring/file.sh@114 -- # killprocess 97043 00:21:17.597 03:05:56 -- common/autotest_common.sh@936 -- # '[' -z 97043 ']' 00:21:17.597 03:05:56 -- common/autotest_common.sh@940 -- # kill -0 97043 
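The save_config snapshot above is the point of the whole exercise: both keyring_file_add_key entries and the --psk key0 attach are serialized alongside the sock and bdev settings, so the identical state can be reproduced without replaying any RPCs. The restart that follows feeds that JSON straight into a fresh bdevperf through process substitution:

config=$(bperf_cmd save_config)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z -c <(echo "$config")    # <(...) is the /dev/fd/63 seen below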
00:21:17.597 03:05:56 -- common/autotest_common.sh@941 -- # uname 00:21:17.597 03:05:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:17.597 03:05:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97043 00:21:17.597 killing process with pid 97043 00:21:17.597 Received shutdown signal, test time was about 1.000000 seconds 00:21:17.597 00:21:17.597 Latency(us) 00:21:17.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.597 =================================================================================================================== 00:21:17.597 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.597 03:05:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:17.597 03:05:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:17.597 03:05:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97043' 00:21:17.597 03:05:56 -- common/autotest_common.sh@955 -- # kill 97043 00:21:17.597 03:05:56 -- common/autotest_common.sh@960 -- # wait 97043 00:21:17.597 03:05:56 -- keyring/file.sh@117 -- # bperfpid=97286 00:21:17.597 03:05:56 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:17.597 03:05:56 -- keyring/file.sh@119 -- # waitforlisten 97286 /var/tmp/bperf.sock 00:21:17.597 03:05:56 -- common/autotest_common.sh@817 -- # '[' -z 97286 ']' 00:21:17.597 03:05:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:17.597 03:05:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:17.597 03:05:56 -- keyring/file.sh@115 -- # echo '{ 00:21:17.597 "subsystems": [ 00:21:17.597 { 00:21:17.597 "subsystem": "keyring", 00:21:17.597 "config": [ 00:21:17.597 { 00:21:17.597 "method": "keyring_file_add_key", 00:21:17.597 "params": { 00:21:17.598 "name": "key0", 00:21:17.598 "path": "/tmp/tmp.WNQiR4VTzP" 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "keyring_file_add_key", 00:21:17.598 "params": { 00:21:17.598 "name": "key1", 00:21:17.598 "path": "/tmp/tmp.P9WyrFZehC" 00:21:17.598 } 00:21:17.598 } 00:21:17.598 ] 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "subsystem": "iobuf", 00:21:17.598 "config": [ 00:21:17.598 { 00:21:17.598 "method": "iobuf_set_options", 00:21:17.598 "params": { 00:21:17.598 "small_pool_count": 8192, 00:21:17.598 "large_pool_count": 1024, 00:21:17.598 "small_bufsize": 8192, 00:21:17.598 "large_bufsize": 135168 00:21:17.598 } 00:21:17.598 } 00:21:17.598 ] 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "subsystem": "sock", 00:21:17.598 "config": [ 00:21:17.598 { 00:21:17.598 "method": "sock_impl_set_options", 00:21:17.598 "params": { 00:21:17.598 "impl_name": "uring", 00:21:17.598 "recv_buf_size": 2097152, 00:21:17.598 "send_buf_size": 2097152, 00:21:17.598 "enable_recv_pipe": true, 00:21:17.598 "enable_quickack": false, 00:21:17.598 "enable_placement_id": 0, 00:21:17.598 "enable_zerocopy_send_server": false, 00:21:17.598 "enable_zerocopy_send_client": false, 00:21:17.598 "zerocopy_threshold": 0, 00:21:17.598 "tls_version": 0, 00:21:17.598 "enable_ktls": false 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "sock_impl_set_options", 00:21:17.598 "params": { 00:21:17.598 "impl_name": "posix", 00:21:17.598 "recv_buf_size": 2097152, 00:21:17.598 "send_buf_size": 2097152, 00:21:17.598 "enable_recv_pipe": true, 00:21:17.598 "enable_quickack": false, 00:21:17.598 "enable_placement_id": 
0, 00:21:17.598 "enable_zerocopy_send_server": true, 00:21:17.598 "enable_zerocopy_send_client": false, 00:21:17.598 "zerocopy_threshold": 0, 00:21:17.598 "tls_version": 0, 00:21:17.598 "enable_ktls": false 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "sock_impl_set_options", 00:21:17.598 "params": { 00:21:17.598 "impl_name": "ssl", 00:21:17.598 "recv_buf_size": 4096, 00:21:17.598 "send_buf_size": 4096, 00:21:17.598 "enable_recv_pipe": true, 00:21:17.598 "enable_quickack": false, 00:21:17.598 "enable_placement_id": 0, 00:21:17.598 "enable_zerocopy_send_server": true, 00:21:17.598 "enable_zerocopy_send_client": false, 00:21:17.598 "zerocopy_threshold": 0, 00:21:17.598 "tls_version": 0, 00:21:17.598 "enable_ktls": false 00:21:17.598 } 00:21:17.598 } 00:21:17.598 ] 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "subsystem": "vmd", 00:21:17.598 "config": [] 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "subsystem": "accel", 00:21:17.598 "config": [ 00:21:17.598 { 00:21:17.598 "method": "accel_set_options", 00:21:17.598 "params": { 00:21:17.598 "small_cache_size": 128, 00:21:17.598 "large_cache_size": 16, 00:21:17.598 "task_count": 2048, 00:21:17.598 "sequence_count": 2048, 00:21:17.598 "buf_count": 2048 00:21:17.598 } 00:21:17.598 } 00:21:17.598 ] 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "subsystem": "bdev", 00:21:17.598 "config": [ 00:21:17.598 { 00:21:17.598 "method": "bdev_set_options", 00:21:17.598 "params": { 00:21:17.598 "bdev_io_pool_size": 65535, 00:21:17.598 "bdev_io_cache_size": 256, 00:21:17.598 "bdev_auto_examine": true, 00:21:17.598 "iobuf_small_cache_size": 128, 00:21:17.598 "iobuf_large_cache_size": 16 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "bdev_raid_set_options", 00:21:17.598 "params": { 00:21:17.598 "process_window_size_kb": 1024 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "bdev_iscsi_set_options", 00:21:17.598 "params": { 00:21:17.598 "timeout_sec": 30 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "bdev_nvme_set_options", 00:21:17.598 "params": { 00:21:17.598 "action_on_timeout": "none", 00:21:17.598 "timeout_us": 0, 00:21:17.598 "timeout_admin_us": 0, 00:21:17.598 "keep_alive_timeout_ms": 10000, 00:21:17.598 "arbitration_burst": 0, 00:21:17.598 "low_priority_weight": 0, 00:21:17.598 "medium_priority_weight": 0, 00:21:17.598 "high_priority_weight": 0, 00:21:17.598 "nvme_adminq_poll_period_us": 10000, 00:21:17.598 "nvme_ioq_poll_period_us": 0, 00:21:17.598 "io_queue_requests": 512, 00:21:17.598 "delay_cmd_submit": true, 00:21:17.598 "transport_retry_count": 4, 00:21:17.598 "bdev_retry_count": 3, 00:21:17.598 "transport_ack_timeout": 0, 00:21:17.598 "ctrlr_loss_timeout_sec": 0, 00:21:17.598 "reconnect_delay_sec": 0, 00:21:17.598 "fast_io_fail_timeout_sec": 0, 00:21:17.598 "disable_auto_failback": false, 00:21:17.598 "generate_uuids": false, 00:21:17.598 "transport_tos": 0, 00:21:17.598 "nvme_error_stat": false, 00:21:17.598 "rdma_srq_size": 0, 00:21:17.598 "io_path_stat": false, 00:21:17.598 "allow_accel_sequence": false, 00:21:17.598 "rdma_max_cq_size": 0, 00:21:17.598 "rdma_cm_event_timeout_ms": 0, 00:21:17.598 "dhchap_digests": [ 00:21:17.598 "sha256", 00:21:17.598 "sha384", 00:21:17.598 "sha512" 00:21:17.598 ], 00:21:17.598 "dhchap_dhgroups": [ 00:21:17.598 "null", 00:21:17.598 "ffdhe2048", 00:21:17.598 "ffdhe3072", 00:21:17.598 "ffdhe4096", 00:21:17.598 "ffdhe6144", 00:21:17.598 "ffdhe8192" 00:21:17.598 ] 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": 
"bdev_nvme_attach_controller", 00:21:17.598 "params": { 00:21:17.598 "name": "nvme0", 00:21:17.598 "trtype": "TCP", 00:21:17.598 "adrfam": "IPv4", 00:21:17.598 "traddr": "127.0.0.1", 00:21:17.598 "trsvcid": "4420", 00:21:17.598 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:17.598 "prchk_reftag": false, 00:21:17.598 "prchk_guard": false, 00:21:17.598 "ctrlr_loss_timeout_sec": 0, 00:21:17.598 "reconnect_delay_sec": 0, 00:21:17.598 "fast_io_fail_timeout_sec": 0, 00:21:17.598 "psk": "key0", 00:21:17.598 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:17.598 "hdgst": false, 00:21:17.598 "ddgst": false 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "bdev_nvme_set_hotplug", 00:21:17.598 "params": { 00:21:17.598 "period_us": 100000, 00:21:17.598 "enable": false 00:21:17.598 } 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "method": "bdev_wait_for_examine" 00:21:17.598 } 00:21:17.598 ] 00:21:17.598 }, 00:21:17.598 { 00:21:17.598 "subsystem": "nbd", 00:21:17.599 "config": [] 00:21:17.599 } 00:21:17.599 ] 00:21:17.599 }' 00:21:17.599 03:05:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:17.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:17.599 03:05:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:17.599 03:05:56 -- common/autotest_common.sh@10 -- # set +x 00:21:17.599 [2024-04-23 03:05:56.747642] Starting SPDK v24.05-pre git sha1 a1264177c / DPDK 24.07.0-rc0 initialization... 00:21:17.599 [2024-04-23 03:05:56.747899] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97286 ] 00:21:17.857 [2024-04-23 03:05:56.869624] pci_dpdk.c: 37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.07.0-rc0 is used. There is no support for it in SPDK. Enabled only for validation. 
00:21:17.857 [2024-04-23 03:05:56.889075] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.857 [2024-04-23 03:05:56.925631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:18.116 [2024-04-23 03:05:57.066434] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.683 03:05:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:18.683 03:05:57 -- common/autotest_common.sh@850 -- # return 0 00:21:18.683 03:05:57 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:21:18.683 03:05:57 -- keyring/file.sh@120 -- # jq length 00:21:18.683 03:05:57 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.941 03:05:58 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:21:18.941 03:05:58 -- keyring/file.sh@121 -- # get_refcnt key0 00:21:18.941 03:05:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:18.941 03:05:58 -- keyring/common.sh@12 -- # get_key key0 00:21:18.941 03:05:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:18.941 03:05:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:18.941 03:05:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:19.199 03:05:58 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:19.199 03:05:58 -- keyring/file.sh@122 -- # get_refcnt key1 00:21:19.199 03:05:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:19.199 03:05:58 -- keyring/common.sh@12 -- # get_key key1 00:21:19.199 03:05:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:19.199 03:05:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:19.199 03:05:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:19.457 03:05:58 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:21:19.457 03:05:58 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:21:19.457 03:05:58 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:21:19.457 03:05:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:19.716 03:05:58 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:21:19.716 03:05:58 -- keyring/file.sh@1 -- # cleanup 00:21:19.716 03:05:58 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.WNQiR4VTzP /tmp/tmp.P9WyrFZehC 00:21:19.716 03:05:58 -- keyring/file.sh@20 -- # killprocess 97286 00:21:19.716 03:05:58 -- common/autotest_common.sh@936 -- # '[' -z 97286 ']' 00:21:19.716 03:05:58 -- common/autotest_common.sh@940 -- # kill -0 97286 00:21:19.716 03:05:58 -- common/autotest_common.sh@941 -- # uname 00:21:19.716 03:05:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.716 03:05:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97286 00:21:19.975 killing process with pid 97286 00:21:19.975 Received shutdown signal, test time was about 1.000000 seconds 00:21:19.975 00:21:19.975 Latency(us) 00:21:19.975 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.975 =================================================================================================================== 00:21:19.975 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.975 03:05:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:19.975 03:05:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:19.975 03:05:58 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 97286' 00:21:19.975 03:05:58 -- common/autotest_common.sh@955 -- # kill 97286 00:21:19.975 03:05:58 -- common/autotest_common.sh@960 -- # wait 97286 00:21:19.975 03:05:59 -- keyring/file.sh@21 -- # killprocess 97035 00:21:19.975 03:05:59 -- common/autotest_common.sh@936 -- # '[' -z 97035 ']' 00:21:19.975 03:05:59 -- common/autotest_common.sh@940 -- # kill -0 97035 00:21:19.975 03:05:59 -- common/autotest_common.sh@941 -- # uname 00:21:19.975 03:05:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.975 03:05:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 97035 00:21:19.975 killing process with pid 97035 00:21:19.975 03:05:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:19.975 03:05:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:19.975 03:05:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 97035' 00:21:19.975 03:05:59 -- common/autotest_common.sh@955 -- # kill 97035 00:21:19.975 [2024-04-23 03:05:59.061067] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:19.975 03:05:59 -- common/autotest_common.sh@960 -- # wait 97035 00:21:20.235 ************************************ 00:21:20.235 END TEST keyring_file 00:21:20.235 ************************************ 00:21:20.235 00:21:20.235 real 0m14.505s 00:21:20.235 user 0m37.593s 00:21:20.235 sys 0m2.849s 00:21:20.235 03:05:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:20.235 03:05:59 -- common/autotest_common.sh@10 -- # set +x 00:21:20.235 03:05:59 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:21:20.235 03:05:59 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:21:20.235 03:05:59 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:21:20.235 03:05:59 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:21:20.235 03:05:59 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:21:20.235 03:05:59 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:21:20.235 03:05:59 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:21:20.235 03:05:59 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:21:20.235 03:05:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:20.235 03:05:59 -- common/autotest_common.sh@10 -- # set +x 00:21:20.235 03:05:59 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:21:20.235 03:05:59 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:21:20.235 03:05:59 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:21:20.235 03:05:59 -- common/autotest_common.sh@10 -- # set +x 00:21:22.138 INFO: APP EXITING 00:21:22.138 INFO: killing all VMs 00:21:22.138 INFO: killing vhost app 00:21:22.138 INFO: EXIT DONE 00:21:22.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so 
not binding PCI dev 00:21:22.706 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:22.706 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:23.643 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:23.643 Cleaning 00:21:23.643 Removing: /var/run/dpdk/spdk0/config 00:21:23.643 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:23.643 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:23.643 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:23.643 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:23.643 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:23.643 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:23.643 Removing: /var/run/dpdk/spdk1/config 00:21:23.643 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:23.643 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:23.643 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:23.643 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:23.643 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:23.643 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:23.643 Removing: /var/run/dpdk/spdk2/config 00:21:23.643 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:23.643 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:23.643 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:23.643 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:23.643 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:23.643 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:23.643 Removing: /var/run/dpdk/spdk3/config 00:21:23.643 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:23.643 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:23.643 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:23.643 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:23.643 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:23.643 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:23.643 Removing: /var/run/dpdk/spdk4/config 00:21:23.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:23.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:23.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:23.643 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:23.643 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:23.643 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:23.643 Removing: /dev/shm/nvmf_trace.0 00:21:23.643 Removing: /dev/shm/spdk_tgt_trace.pid72138 00:21:23.643 Removing: /var/run/dpdk/spdk0 00:21:23.643 Removing: /var/run/dpdk/spdk1 00:21:23.643 Removing: /var/run/dpdk/spdk2 00:21:23.643 Removing: /var/run/dpdk/spdk3 00:21:23.643 Removing: /var/run/dpdk/spdk4 00:21:23.643 Removing: /var/run/dpdk/spdk_pid71975 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72138 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72363 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72448 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72475 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72593 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72598 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72727 00:21:23.643 Removing: /var/run/dpdk/spdk_pid72923 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73063 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73138 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73206 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73289 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73364 00:21:23.643 Removing: 
/var/run/dpdk/spdk_pid73407 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73446 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73507 00:21:23.643 Removing: /var/run/dpdk/spdk_pid73608 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74039 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74082 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74137 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74153 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74214 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74228 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74289 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74303 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74347 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74365 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74409 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74421 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74552 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74586 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74671 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74718 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74746 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74819 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74857 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74897 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74935 00:21:23.643 Removing: /var/run/dpdk/spdk_pid74968 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75007 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75045 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75078 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75117 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75156 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75195 00:21:23.643 Removing: /var/run/dpdk/spdk_pid75229 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75268 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75307 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75340 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75384 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75417 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75464 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75500 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75533 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75579 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75643 00:21:23.900 Removing: /var/run/dpdk/spdk_pid75734 00:21:23.900 Removing: /var/run/dpdk/spdk_pid76055 00:21:23.900 Removing: /var/run/dpdk/spdk_pid76071 00:21:23.900 Removing: /var/run/dpdk/spdk_pid76106 00:21:23.900 Removing: /var/run/dpdk/spdk_pid76125 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76135 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76154 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76173 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76184 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76203 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76217 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76232 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76251 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76267 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76282 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76296 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76309 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76325 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76344 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76357 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76373 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76408 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76422 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76451 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76520 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76552 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76562 
00:21:23.901 Removing: /var/run/dpdk/spdk_pid76595 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76605 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76607 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76654 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76668 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76700 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76710 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76714 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76723 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76733 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76737 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76746 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76756 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76783 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76819 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76823 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76861 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76865 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76871 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76917 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76923 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76959 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76961 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76974 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76976 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76978 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76991 00:21:23.901 Removing: /var/run/dpdk/spdk_pid76993 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77005 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77079 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77121 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77231 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77275 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77318 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77332 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77349 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77363 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77395 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77410 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77489 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77505 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77544 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77605 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77650 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77688 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77783 00:21:23.901 Removing: /var/run/dpdk/spdk_pid77835 00:21:24.165 Removing: /var/run/dpdk/spdk_pid77866 00:21:24.165 Removing: /var/run/dpdk/spdk_pid78126 00:21:24.165 Removing: /var/run/dpdk/spdk_pid78235 00:21:24.165 Removing: /var/run/dpdk/spdk_pid78273 00:21:24.165 Removing: /var/run/dpdk/spdk_pid78591 00:21:24.165 Removing: /var/run/dpdk/spdk_pid78625 00:21:24.165 Removing: /var/run/dpdk/spdk_pid78927 00:21:24.165 Removing: /var/run/dpdk/spdk_pid79325 00:21:24.165 Removing: /var/run/dpdk/spdk_pid79585 00:21:24.165 Removing: /var/run/dpdk/spdk_pid80338 00:21:24.165 Removing: /var/run/dpdk/spdk_pid81151 00:21:24.165 Removing: /var/run/dpdk/spdk_pid81267 00:21:24.165 Removing: /var/run/dpdk/spdk_pid81329 00:21:24.165 Removing: /var/run/dpdk/spdk_pid82585 00:21:24.165 Removing: /var/run/dpdk/spdk_pid82796 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83091 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83204 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83330 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83352 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83380 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83402 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83496 00:21:24.165 Removing: 
/var/run/dpdk/spdk_pid83617 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83741 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83809 00:21:24.165 Removing: /var/run/dpdk/spdk_pid83990 00:21:24.165 Removing: /var/run/dpdk/spdk_pid84051 00:21:24.165 Removing: /var/run/dpdk/spdk_pid84144 00:21:24.165 Removing: /var/run/dpdk/spdk_pid84451 00:21:24.165 Removing: /var/run/dpdk/spdk_pid84780 00:21:24.165 Removing: /var/run/dpdk/spdk_pid84792 00:21:24.165 Removing: /var/run/dpdk/spdk_pid86976 00:21:24.165 Removing: /var/run/dpdk/spdk_pid86983 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87253 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87267 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87285 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87317 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87322 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87407 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87415 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87524 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87526 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87634 00:21:24.165 Removing: /var/run/dpdk/spdk_pid87640 00:21:24.165 Removing: /var/run/dpdk/spdk_pid88014 00:21:24.165 Removing: /var/run/dpdk/spdk_pid88057 00:21:24.165 Removing: /var/run/dpdk/spdk_pid88136 00:21:24.165 Removing: /var/run/dpdk/spdk_pid88195 00:21:24.165 Removing: /var/run/dpdk/spdk_pid88482 00:21:24.165 Removing: /var/run/dpdk/spdk_pid88676 00:21:24.165 Removing: /var/run/dpdk/spdk_pid89059 00:21:24.165 Removing: /var/run/dpdk/spdk_pid89533 00:21:24.165 Removing: /var/run/dpdk/spdk_pid90119 00:21:24.165 Removing: /var/run/dpdk/spdk_pid90131 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92064 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92111 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92164 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92211 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92315 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92368 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92415 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92467 00:21:24.165 Removing: /var/run/dpdk/spdk_pid92769 00:21:24.165 Removing: /var/run/dpdk/spdk_pid93921 00:21:24.165 Removing: /var/run/dpdk/spdk_pid94066 00:21:24.165 Removing: /var/run/dpdk/spdk_pid94298 00:21:24.165 Removing: /var/run/dpdk/spdk_pid94840 00:21:24.165 Removing: /var/run/dpdk/spdk_pid95003 00:21:24.165 Removing: /var/run/dpdk/spdk_pid95164 00:21:24.165 Removing: /var/run/dpdk/spdk_pid95261 00:21:24.165 Removing: /var/run/dpdk/spdk_pid95435 00:21:24.165 Removing: /var/run/dpdk/spdk_pid95550 00:21:24.165 Removing: /var/run/dpdk/spdk_pid96206 00:21:24.165 Removing: /var/run/dpdk/spdk_pid96240 00:21:24.165 Removing: /var/run/dpdk/spdk_pid96271 00:21:24.165 Removing: /var/run/dpdk/spdk_pid96532 00:21:24.165 Removing: /var/run/dpdk/spdk_pid96567 00:21:24.165 Removing: /var/run/dpdk/spdk_pid96597 00:21:24.165 Removing: /var/run/dpdk/spdk_pid97035 00:21:24.165 Removing: /var/run/dpdk/spdk_pid97043 00:21:24.165 Removing: /var/run/dpdk/spdk_pid97286 00:21:24.165 Clean 00:21:24.427 03:06:03 -- common/autotest_common.sh@1437 -- # return 0 00:21:24.427 03:06:03 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:21:24.427 03:06:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:24.427 03:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.427 03:06:03 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:21:24.427 03:06:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:24.427 03:06:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.427 03:06:03 -- spdk/autotest.sh@385 -- 
# chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:24.427 03:06:03 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:21:24.427 03:06:03 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:21:24.427 03:06:03 -- spdk/autotest.sh@389 -- # hash lcov 00:21:24.427 03:06:03 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:21:24.427 03:06:03 -- spdk/autotest.sh@391 -- # hostname 00:21:24.427 03:06:03 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:21:24.684 geninfo: WARNING: invalid characters removed from testname! 00:21:56.769 03:06:30 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:56.769 03:06:34 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:57.705 03:06:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:00.261 03:06:39 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:03.547 03:06:42 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:06.081 03:06:45 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:09.368 03:06:48 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:22:09.368 03:06:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:09.368 03:06:48 -- scripts/common.sh@502 -- $ [[ -e /bin/wpdk_common.sh ]] 00:22:09.368 03:06:48 -- scripts/common.sh@510 -- $ [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:09.368 03:06:48 -- scripts/common.sh@511 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:09.368 03:06:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.368 03:06:48 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.368 03:06:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.368 03:06:48 -- paths/export.sh@5 -- $ export PATH 00:22:09.368 03:06:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:09.368 03:06:48 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:22:09.368 03:06:48 -- common/autobuild_common.sh@435 -- $ date +%s 00:22:09.368 03:06:48 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713841608.XXXXXX 00:22:09.368 03:06:48 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713841608.mwypRV 00:22:09.368 03:06:48 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:22:09.368 03:06:48 -- common/autobuild_common.sh@441 -- $ '[' -n main ']' 00:22:09.368 03:06:48 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:22:09.368 03:06:48 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:22:09.369 03:06:48 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:22:09.369 03:06:48 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:22:09.369 03:06:48 -- common/autobuild_common.sh@451 -- $ get_config_params 00:22:09.369 03:06:48 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:22:09.369 03:06:48 -- common/autotest_common.sh@10 -- $ set +x 00:22:09.369 03:06:48 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring 
--with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:22:09.369 03:06:48 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:22:09.369 03:06:48 -- pm/common@17 -- $ local monitor 00:22:09.369 03:06:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:09.369 03:06:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=99044 00:22:09.369 03:06:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:09.369 03:06:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=99046 00:22:09.369 03:06:48 -- pm/common@26 -- $ sleep 1 00:22:09.369 03:06:48 -- pm/common@21 -- $ date +%s 00:22:09.369 03:06:48 -- pm/common@21 -- $ date +%s 00:22:09.369 03:06:48 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713841608 00:22:09.369 03:06:48 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1713841608 00:22:09.369 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713841608_collect-vmstat.pm.log 00:22:09.369 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1713841608_collect-cpu-load.pm.log 00:22:10.329 03:06:49 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:22:10.329 03:06:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:22:10.329 03:06:49 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:22:10.329 03:06:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:22:10.329 03:06:49 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:22:10.329 03:06:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:22:10.329 03:06:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:22:10.329 03:06:49 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:22:10.329 03:06:49 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:22:10.329 03:06:49 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:10.329 03:06:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:22:10.329 03:06:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:22:10.329 03:06:49 -- pm/common@30 -- $ signal_monitor_resources TERM 00:22:10.329 03:06:49 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:22:10.329 03:06:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:10.329 03:06:49 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:22:10.329 03:06:49 -- pm/common@45 -- $ pid=99052 00:22:10.329 03:06:49 -- pm/common@52 -- $ sudo kill -TERM 99052 00:22:10.588 03:06:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:22:10.588 03:06:49 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:22:10.588 03:06:49 -- pm/common@45 -- $ pid=99053 00:22:10.588 03:06:49 -- pm/common@52 -- $ sudo kill -TERM 99053 00:22:10.588 + [[ -n 5889 ]] 00:22:10.588 + sudo kill 5889 00:22:10.598 [Pipeline] } 00:22:10.615 [Pipeline] // timeout 00:22:10.620 [Pipeline] } 00:22:10.638 [Pipeline] // stage 00:22:10.643 [Pipeline] } 00:22:10.659 [Pipeline] // catchError 00:22:10.667 [Pipeline] stage 00:22:10.670 [Pipeline] { (Stop VM) 00:22:10.681 [Pipeline] sh 00:22:10.962 + 
vagrant halt 00:22:16.235 ==> default: Halting domain... 00:22:21.517 [Pipeline] sh 00:22:21.798 + vagrant destroy -f 00:22:25.989 ==> default: Removing domain... 00:22:26.001 [Pipeline] sh 00:22:26.282 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:22:26.291 [Pipeline] } 00:22:26.311 [Pipeline] // stage 00:22:26.317 [Pipeline] } 00:22:26.334 [Pipeline] // dir 00:22:26.340 [Pipeline] } 00:22:26.357 [Pipeline] // wrap 00:22:26.364 [Pipeline] } 00:22:26.379 [Pipeline] // catchError 00:22:26.389 [Pipeline] stage 00:22:26.391 [Pipeline] { (Epilogue) 00:22:26.407 [Pipeline] sh 00:22:26.690 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:22:33.284 [Pipeline] catchError 00:22:33.286 [Pipeline] { 00:22:33.303 [Pipeline] sh 00:22:33.584 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:22:33.843 Artifacts sizes are good 00:22:33.853 [Pipeline] } 00:22:33.869 [Pipeline] // catchError 00:22:33.880 [Pipeline] archiveArtifacts 00:22:33.887 Archiving artifacts 00:22:34.053 [Pipeline] cleanWs 00:22:34.066 [WS-CLEANUP] Deleting project workspace... 00:22:34.066 [WS-CLEANUP] Deferred wipeout is used... 00:22:34.073 [WS-CLEANUP] done 00:22:34.075 [Pipeline] } 00:22:34.093 [Pipeline] // stage 00:22:34.100 [Pipeline] } 00:22:34.117 [Pipeline] // node 00:22:34.122 [Pipeline] End of Pipeline 00:22:34.172 Finished: SUCCESS
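The coverage pass near the end of this log reduces to three lcov steps; a condensed sketch is below, with the per-pattern remove calls copied from the invocations above and the --rc branch/function-coverage switches the run passes omitted for brevity.

    # capture counters produced by the test run into a per-host tracefile
    lcov -q -c -d /home/vagrant/spdk_repo/spdk --no-external -o cov_test.info
    # merge the pre-test baseline capture with the test capture
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
    # strip vendored DPDK and system headers from the merged report
    lcov -q -r cov_total.info '*/dpdk/*' -o cov_total.info
    lcov -q -r cov_total.info '/usr/*' -o cov_total.info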